Test Report: KVM_Linux 19690

f8db61c9b74e1fc8d4208c01add19855c5953b45:2024-09-23:36339

Failed tests (2/340)

Order  Failed test                            Duration
33     TestAddons/parallel/Registry           73.64s
241    TestMultiNode/serial/RestartMultiNode  154.65s
TestAddons/parallel/Registry (73.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.049752ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-wc269" [93368544-2bdd-4676-901f-cc2b1f4cfa8a] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007521323s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-49g5s" [da46be15-7046-4070-9862-00f5586a04c9] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004522063s
addons_test.go:338: (dbg) Run:  kubectl --context addons-825629 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-825629 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-825629 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.088571102s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-825629 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
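The immediate failure above is the in-cluster `wget --spider` probe timing out after 1m0s rather than the registry pods being unhealthy (both pods reported Running). When triaging this kind of transient timeout, retrying the probe a few times can distinguish a flaky network path from a hard failure. The sketch below is illustrative only, not part of the test suite: `retry` and `check_registry` are hypothetical helper names, and the stub probe always fails so the script runs standalone; in a real session the probe would be the `kubectl ... wget --spider` command from the log.

```shell
#!/bin/sh
# Hedged debugging sketch. In a real session, replace check_registry with
# the probe from the log, e.g.:
#   kubectl --context addons-825629 run --rm registry-test --restart=Never \
#     --image=gcr.io/k8s-minikube/busybox -it -- sh -c \
#     "wget --spider -S http://registry.kube-system.svc.cluster.local"

# Run a command up to $1 times, pausing 1s between attempts.
retry() {
  attempts=$1
  shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Stub probe so the sketch is self-contained; it always fails, mimicking
# the timeout seen in the log.
check_registry() { return 1; }

retry 3 check_registry || echo "registry probe failed after 3 attempts"
```

If repeated probes succeed intermittently, the likely culprit is in-cluster DNS or the registry-proxy; a consistent failure points at the Service or NetworkPolicy layer.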
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 ip
2024/09/23 12:06:47 [DEBUG] GET http://192.168.39.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-825629 -n addons-825629
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-535633                                                                     | download-only-535633 | jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	| delete  | -p download-only-569713                                                                     | download-only-569713 | jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-304120 | jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | binary-mirror-304120                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41975                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-304120                                                                     | binary-mirror-304120 | jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:53 UTC |
	| addons  | enable dashboard -p                                                                         | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | addons-825629                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 11:53 UTC |                     |
	|         | addons-825629                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-825629 --wait=true                                                                | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 11:53 UTC | 23 Sep 24 11:56 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2  --addons=ingress                                                             |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825629 addons disable                                                                | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 11:57 UTC | 23 Sep 24 11:57 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:05 UTC | 23 Sep 24 12:05 UTC |
	|         | -p addons-825629                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:05 UTC | 23 Sep 24 12:05 UTC |
	|         | -p addons-825629                                                                            |                      |         |         |                     |                     |
	| addons  | addons-825629 addons disable                                                                | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:05 UTC | 23 Sep 24 12:05 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-825629 addons disable                                                                | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:05 UTC | 23 Sep 24 12:05 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825629 addons                                                                        | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:05 UTC | 23 Sep 24 12:05 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:05 UTC | 23 Sep 24 12:06 UTC |
	|         | addons-825629                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-825629 ssh curl -s                                                                   | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC | 23 Sep 24 12:06 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-825629 ip                                                                            | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC | 23 Sep 24 12:06 UTC |
	| addons  | addons-825629 addons disable                                                                | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC | 23 Sep 24 12:06 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-825629 addons disable                                                                | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC | 23 Sep 24 12:06 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ssh     | addons-825629 ssh cat                                                                       | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC | 23 Sep 24 12:06 UTC |
	|         | /opt/local-path-provisioner/pvc-2838b3b1-f740-472d-b374-6eb70574df74_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-825629 addons disable                                                                | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC | 23 Sep 24 12:06 UTC |
	|         | addons-825629                                                                               |                      |         |         |                     |                     |
	| addons  | addons-825629 addons                                                                        | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC | 23 Sep 24 12:06 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-825629 addons                                                                        | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC | 23 Sep 24 12:06 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-825629 ip                                                                            | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC | 23 Sep 24 12:06 UTC |
	| addons  | addons-825629 addons disable                                                                | addons-825629        | jenkins | v1.34.0 | 23 Sep 24 12:06 UTC | 23 Sep 24 12:06 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:53:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:53:11.107693  505701 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:53:11.107824  505701 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:53:11.107835  505701 out.go:358] Setting ErrFile to fd 2...
	I0923 11:53:11.107842  505701 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:53:11.108145  505701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	I0923 11:53:11.108926  505701 out.go:352] Setting JSON to false
	I0923 11:53:11.110212  505701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5733,"bootTime":1727086658,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:53:11.110350  505701 start.go:139] virtualization: kvm guest
	I0923 11:53:11.112274  505701 out.go:177] * [addons-825629] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:53:11.113622  505701 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 11:53:11.113678  505701 notify.go:220] Checking for updates...
	I0923 11:53:11.116060  505701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:53:11.117400  505701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 11:53:11.118625  505701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	I0923 11:53:11.119959  505701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:53:11.121155  505701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:53:11.122375  505701 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:53:11.155905  505701 out.go:177] * Using the kvm2 driver based on user configuration
	I0923 11:53:11.157090  505701 start.go:297] selected driver: kvm2
	I0923 11:53:11.157104  505701 start.go:901] validating driver "kvm2" against <nil>
	I0923 11:53:11.157116  505701 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:53:11.157832  505701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:53:11.157906  505701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-497735/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 11:53:11.174097  505701 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 11:53:11.174159  505701 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:53:11.174742  505701 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:53:11.174845  505701 cni.go:84] Creating CNI manager for ""
	I0923 11:53:11.174931  505701 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:53:11.174953  505701 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 11:53:11.175087  505701 start.go:340] cluster config:
	{Name:addons-825629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-825629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:53:11.175303  505701 iso.go:125] acquiring lock: {Name:mkc30b88bda541d89938b3c13430927ceb85d23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:53:11.177282  505701 out.go:177] * Starting "addons-825629" primary control-plane node in "addons-825629" cluster
	I0923 11:53:11.178900  505701 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:53:11.178952  505701 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:53:11.178962  505701 cache.go:56] Caching tarball of preloaded images
	I0923 11:53:11.179058  505701 preload.go:172] Found /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 11:53:11.179072  505701 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 11:53:11.179406  505701 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/config.json ...
	I0923 11:53:11.179431  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/config.json: {Name:mkafe2336026e90df3619e8e065f14c1fb2c7757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:11.179583  505701 start.go:360] acquireMachinesLock for addons-825629: {Name:mk9742766ed80b377dab18455a5851b42572655c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:53:11.179630  505701 start.go:364] duration metric: took 32.508µs to acquireMachinesLock for "addons-825629"
	I0923 11:53:11.179647  505701 start.go:93] Provisioning new machine with config: &{Name:addons-825629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-825629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:53:11.179715  505701 start.go:125] createHost starting for "" (driver="kvm2")
	I0923 11:53:11.181430  505701 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0923 11:53:11.181588  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:53:11.181621  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:53:11.196706  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34685
	I0923 11:53:11.197316  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:53:11.197987  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:53:11.198018  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:53:11.198386  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:53:11.198673  505701 main.go:141] libmachine: (addons-825629) Calling .GetMachineName
	I0923 11:53:11.198841  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:53:11.198999  505701 start.go:159] libmachine.API.Create for "addons-825629" (driver="kvm2")
	I0923 11:53:11.199028  505701 client.go:168] LocalClient.Create starting
	I0923 11:53:11.199088  505701 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem
	I0923 11:53:11.444699  505701 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem
	I0923 11:53:11.647543  505701 main.go:141] libmachine: Running pre-create checks...
	I0923 11:53:11.647577  505701 main.go:141] libmachine: (addons-825629) Calling .PreCreateCheck
	I0923 11:53:11.648104  505701 main.go:141] libmachine: (addons-825629) Calling .GetConfigRaw
	I0923 11:53:11.648646  505701 main.go:141] libmachine: Creating machine...
	I0923 11:53:11.648668  505701 main.go:141] libmachine: (addons-825629) Calling .Create
	I0923 11:53:11.648855  505701 main.go:141] libmachine: (addons-825629) Creating KVM machine...
	I0923 11:53:11.650422  505701 main.go:141] libmachine: (addons-825629) DBG | found existing default KVM network
	I0923 11:53:11.651311  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:11.651154  505724 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211f0}
	I0923 11:53:11.651348  505701 main.go:141] libmachine: (addons-825629) DBG | created network xml: 
	I0923 11:53:11.651364  505701 main.go:141] libmachine: (addons-825629) DBG | <network>
	I0923 11:53:11.651424  505701 main.go:141] libmachine: (addons-825629) DBG |   <name>mk-addons-825629</name>
	I0923 11:53:11.651455  505701 main.go:141] libmachine: (addons-825629) DBG |   <dns enable='no'/>
	I0923 11:53:11.651468  505701 main.go:141] libmachine: (addons-825629) DBG |   
	I0923 11:53:11.651483  505701 main.go:141] libmachine: (addons-825629) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0923 11:53:11.651496  505701 main.go:141] libmachine: (addons-825629) DBG |     <dhcp>
	I0923 11:53:11.651507  505701 main.go:141] libmachine: (addons-825629) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0923 11:53:11.651518  505701 main.go:141] libmachine: (addons-825629) DBG |     </dhcp>
	I0923 11:53:11.651531  505701 main.go:141] libmachine: (addons-825629) DBG |   </ip>
	I0923 11:53:11.651542  505701 main.go:141] libmachine: (addons-825629) DBG |   
	I0923 11:53:11.651552  505701 main.go:141] libmachine: (addons-825629) DBG | </network>
	I0923 11:53:11.651564  505701 main.go:141] libmachine: (addons-825629) DBG | 
	I0923 11:53:11.656921  505701 main.go:141] libmachine: (addons-825629) DBG | trying to create private KVM network mk-addons-825629 192.168.39.0/24...
	I0923 11:53:11.725984  505701 main.go:141] libmachine: (addons-825629) Setting up store path in /home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629 ...
	I0923 11:53:11.726018  505701 main.go:141] libmachine: (addons-825629) DBG | private KVM network mk-addons-825629 192.168.39.0/24 created
	I0923 11:53:11.726045  505701 main.go:141] libmachine: (addons-825629) Building disk image from file:///home/jenkins/minikube-integration/19690-497735/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 11:53:11.726070  505701 main.go:141] libmachine: (addons-825629) Downloading /home/jenkins/minikube-integration/19690-497735/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19690-497735/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0923 11:53:11.726087  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:11.725912  505724 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19690-497735/.minikube
	I0923 11:53:11.997391  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:11.997223  505724 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa...
	I0923 11:53:12.068238  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:12.068079  505724 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/addons-825629.rawdisk...
	I0923 11:53:12.068269  505701 main.go:141] libmachine: (addons-825629) DBG | Writing magic tar header
	I0923 11:53:12.068279  505701 main.go:141] libmachine: (addons-825629) DBG | Writing SSH key tar header
	I0923 11:53:12.068287  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:12.068218  505724 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629 ...
	I0923 11:53:12.068300  505701 main.go:141] libmachine: (addons-825629) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629
	I0923 11:53:12.068404  505701 main.go:141] libmachine: (addons-825629) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-497735/.minikube/machines
	I0923 11:53:12.068425  505701 main.go:141] libmachine: (addons-825629) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-497735/.minikube
	I0923 11:53:12.068436  505701 main.go:141] libmachine: (addons-825629) Setting executable bit set on /home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629 (perms=drwx------)
	I0923 11:53:12.068452  505701 main.go:141] libmachine: (addons-825629) Setting executable bit set on /home/jenkins/minikube-integration/19690-497735/.minikube/machines (perms=drwxr-xr-x)
	I0923 11:53:12.068462  505701 main.go:141] libmachine: (addons-825629) Setting executable bit set on /home/jenkins/minikube-integration/19690-497735/.minikube (perms=drwxr-xr-x)
	I0923 11:53:12.068472  505701 main.go:141] libmachine: (addons-825629) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19690-497735
	I0923 11:53:12.068489  505701 main.go:141] libmachine: (addons-825629) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0923 11:53:12.068500  505701 main.go:141] libmachine: (addons-825629) DBG | Checking permissions on dir: /home/jenkins
	I0923 11:53:12.068524  505701 main.go:141] libmachine: (addons-825629) Setting executable bit set on /home/jenkins/minikube-integration/19690-497735 (perms=drwxrwxr-x)
	I0923 11:53:12.068547  505701 main.go:141] libmachine: (addons-825629) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0923 11:53:12.068554  505701 main.go:141] libmachine: (addons-825629) DBG | Checking permissions on dir: /home
	I0923 11:53:12.068565  505701 main.go:141] libmachine: (addons-825629) DBG | Skipping /home - not owner
	I0923 11:53:12.068571  505701 main.go:141] libmachine: (addons-825629) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0923 11:53:12.068579  505701 main.go:141] libmachine: (addons-825629) Creating domain...
	I0923 11:53:12.069824  505701 main.go:141] libmachine: (addons-825629) define libvirt domain using xml: 
	I0923 11:53:12.069861  505701 main.go:141] libmachine: (addons-825629) <domain type='kvm'>
	I0923 11:53:12.069871  505701 main.go:141] libmachine: (addons-825629)   <name>addons-825629</name>
	I0923 11:53:12.069878  505701 main.go:141] libmachine: (addons-825629)   <memory unit='MiB'>4000</memory>
	I0923 11:53:12.069885  505701 main.go:141] libmachine: (addons-825629)   <vcpu>2</vcpu>
	I0923 11:53:12.069891  505701 main.go:141] libmachine: (addons-825629)   <features>
	I0923 11:53:12.069899  505701 main.go:141] libmachine: (addons-825629)     <acpi/>
	I0923 11:53:12.069905  505701 main.go:141] libmachine: (addons-825629)     <apic/>
	I0923 11:53:12.069915  505701 main.go:141] libmachine: (addons-825629)     <pae/>
	I0923 11:53:12.069920  505701 main.go:141] libmachine: (addons-825629)     
	I0923 11:53:12.069931  505701 main.go:141] libmachine: (addons-825629)   </features>
	I0923 11:53:12.069939  505701 main.go:141] libmachine: (addons-825629)   <cpu mode='host-passthrough'>
	I0923 11:53:12.069949  505701 main.go:141] libmachine: (addons-825629)   
	I0923 11:53:12.069967  505701 main.go:141] libmachine: (addons-825629)   </cpu>
	I0923 11:53:12.069977  505701 main.go:141] libmachine: (addons-825629)   <os>
	I0923 11:53:12.069987  505701 main.go:141] libmachine: (addons-825629)     <type>hvm</type>
	I0923 11:53:12.069998  505701 main.go:141] libmachine: (addons-825629)     <boot dev='cdrom'/>
	I0923 11:53:12.070007  505701 main.go:141] libmachine: (addons-825629)     <boot dev='hd'/>
	I0923 11:53:12.070015  505701 main.go:141] libmachine: (addons-825629)     <bootmenu enable='no'/>
	I0923 11:53:12.070024  505701 main.go:141] libmachine: (addons-825629)   </os>
	I0923 11:53:12.070044  505701 main.go:141] libmachine: (addons-825629)   <devices>
	I0923 11:53:12.070057  505701 main.go:141] libmachine: (addons-825629)     <disk type='file' device='cdrom'>
	I0923 11:53:12.070065  505701 main.go:141] libmachine: (addons-825629)       <source file='/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/boot2docker.iso'/>
	I0923 11:53:12.070072  505701 main.go:141] libmachine: (addons-825629)       <target dev='hdc' bus='scsi'/>
	I0923 11:53:12.070077  505701 main.go:141] libmachine: (addons-825629)       <readonly/>
	I0923 11:53:12.070106  505701 main.go:141] libmachine: (addons-825629)     </disk>
	I0923 11:53:12.070119  505701 main.go:141] libmachine: (addons-825629)     <disk type='file' device='disk'>
	I0923 11:53:12.070132  505701 main.go:141] libmachine: (addons-825629)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0923 11:53:12.070142  505701 main.go:141] libmachine: (addons-825629)       <source file='/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/addons-825629.rawdisk'/>
	I0923 11:53:12.070149  505701 main.go:141] libmachine: (addons-825629)       <target dev='hda' bus='virtio'/>
	I0923 11:53:12.070154  505701 main.go:141] libmachine: (addons-825629)     </disk>
	I0923 11:53:12.070161  505701 main.go:141] libmachine: (addons-825629)     <interface type='network'>
	I0923 11:53:12.070166  505701 main.go:141] libmachine: (addons-825629)       <source network='mk-addons-825629'/>
	I0923 11:53:12.070176  505701 main.go:141] libmachine: (addons-825629)       <model type='virtio'/>
	I0923 11:53:12.070181  505701 main.go:141] libmachine: (addons-825629)     </interface>
	I0923 11:53:12.070190  505701 main.go:141] libmachine: (addons-825629)     <interface type='network'>
	I0923 11:53:12.070195  505701 main.go:141] libmachine: (addons-825629)       <source network='default'/>
	I0923 11:53:12.070200  505701 main.go:141] libmachine: (addons-825629)       <model type='virtio'/>
	I0923 11:53:12.070204  505701 main.go:141] libmachine: (addons-825629)     </interface>
	I0923 11:53:12.070211  505701 main.go:141] libmachine: (addons-825629)     <serial type='pty'>
	I0923 11:53:12.070216  505701 main.go:141] libmachine: (addons-825629)       <target port='0'/>
	I0923 11:53:12.070222  505701 main.go:141] libmachine: (addons-825629)     </serial>
	I0923 11:53:12.070227  505701 main.go:141] libmachine: (addons-825629)     <console type='pty'>
	I0923 11:53:12.070237  505701 main.go:141] libmachine: (addons-825629)       <target type='serial' port='0'/>
	I0923 11:53:12.070244  505701 main.go:141] libmachine: (addons-825629)     </console>
	I0923 11:53:12.070248  505701 main.go:141] libmachine: (addons-825629)     <rng model='virtio'>
	I0923 11:53:12.070256  505701 main.go:141] libmachine: (addons-825629)       <backend model='random'>/dev/random</backend>
	I0923 11:53:12.070260  505701 main.go:141] libmachine: (addons-825629)     </rng>
	I0923 11:53:12.070294  505701 main.go:141] libmachine: (addons-825629)     
	I0923 11:53:12.070314  505701 main.go:141] libmachine: (addons-825629)     
	I0923 11:53:12.070335  505701 main.go:141] libmachine: (addons-825629)   </devices>
	I0923 11:53:12.070344  505701 main.go:141] libmachine: (addons-825629) </domain>
	I0923 11:53:12.070356  505701 main.go:141] libmachine: (addons-825629) 
	I0923 11:53:12.129079  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:e4:c7:3f in network default
	I0923 11:53:12.129802  505701 main.go:141] libmachine: (addons-825629) Ensuring networks are active...
	I0923 11:53:12.129830  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:12.130679  505701 main.go:141] libmachine: (addons-825629) Ensuring network default is active
	I0923 11:53:12.131167  505701 main.go:141] libmachine: (addons-825629) Ensuring network mk-addons-825629 is active
	I0923 11:53:12.131796  505701 main.go:141] libmachine: (addons-825629) Getting domain xml...
	I0923 11:53:12.132653  505701 main.go:141] libmachine: (addons-825629) Creating domain...
	I0923 11:53:13.510476  505701 main.go:141] libmachine: (addons-825629) Waiting to get IP...
	I0923 11:53:13.511286  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:13.511696  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:13.511847  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:13.511772  505724 retry.go:31] will retry after 235.197863ms: waiting for machine to come up
	I0923 11:53:13.748223  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:13.748623  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:13.748653  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:13.748577  505724 retry.go:31] will retry after 315.393137ms: waiting for machine to come up
	I0923 11:53:14.065307  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:14.065759  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:14.065787  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:14.065707  505724 retry.go:31] will retry after 356.235094ms: waiting for machine to come up
	I0923 11:53:14.423111  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:14.423510  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:14.423535  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:14.423458  505724 retry.go:31] will retry after 450.303274ms: waiting for machine to come up
	I0923 11:53:14.875266  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:14.875739  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:14.875768  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:14.875683  505724 retry.go:31] will retry after 697.914842ms: waiting for machine to come up
	I0923 11:53:15.575640  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:15.576061  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:15.576091  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:15.576005  505724 retry.go:31] will retry after 805.933039ms: waiting for machine to come up
	I0923 11:53:16.383834  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:16.384328  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:16.384359  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:16.384272  505724 retry.go:31] will retry after 774.251405ms: waiting for machine to come up
	I0923 11:53:17.159964  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:17.160462  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:17.160487  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:17.160407  505724 retry.go:31] will retry after 1.1049223s: waiting for machine to come up
	I0923 11:53:18.266823  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:18.267253  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:18.267285  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:18.267201  505724 retry.go:31] will retry after 1.623906363s: waiting for machine to come up
	I0923 11:53:19.892641  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:19.893083  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:19.893108  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:19.893035  505724 retry.go:31] will retry after 2.090229974s: waiting for machine to come up
	I0923 11:53:21.985307  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:21.985779  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:21.985809  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:21.985727  505724 retry.go:31] will retry after 2.151955025s: waiting for machine to come up
	I0923 11:53:24.140219  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:24.140721  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:24.140744  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:24.140665  505724 retry.go:31] will retry after 2.769301136s: waiting for machine to come up
	I0923 11:53:26.911279  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:26.911662  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find current IP address of domain addons-825629 in network mk-addons-825629
	I0923 11:53:26.911697  505701 main.go:141] libmachine: (addons-825629) DBG | I0923 11:53:26.911596  505724 retry.go:31] will retry after 4.401896949s: waiting for machine to come up
	I0923 11:53:31.315307  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.315823  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has current primary IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.315874  505701 main.go:141] libmachine: (addons-825629) Found IP for machine: 192.168.39.2
	I0923 11:53:31.315891  505701 main.go:141] libmachine: (addons-825629) Reserving static IP address...
	I0923 11:53:31.316449  505701 main.go:141] libmachine: (addons-825629) DBG | unable to find host DHCP lease matching {name: "addons-825629", mac: "52:54:00:cc:e2:9c", ip: "192.168.39.2"} in network mk-addons-825629
	I0923 11:53:31.396226  505701 main.go:141] libmachine: (addons-825629) DBG | Getting to WaitForSSH function...
	I0923 11:53:31.396253  505701 main.go:141] libmachine: (addons-825629) Reserved static IP address: 192.168.39.2
	I0923 11:53:31.396321  505701 main.go:141] libmachine: (addons-825629) Waiting for SSH to be available...
	I0923 11:53:31.398845  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.399312  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:31.399348  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.399533  505701 main.go:141] libmachine: (addons-825629) DBG | Using SSH client type: external
	I0923 11:53:31.399561  505701 main.go:141] libmachine: (addons-825629) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa (-rw-------)
	I0923 11:53:31.399590  505701 main.go:141] libmachine: (addons-825629) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 11:53:31.399609  505701 main.go:141] libmachine: (addons-825629) DBG | About to run SSH command:
	I0923 11:53:31.399623  505701 main.go:141] libmachine: (addons-825629) DBG | exit 0
	I0923 11:53:31.526789  505701 main.go:141] libmachine: (addons-825629) DBG | SSH cmd err, output: <nil>: 
	I0923 11:53:31.527023  505701 main.go:141] libmachine: (addons-825629) KVM machine creation complete!
	I0923 11:53:31.527415  505701 main.go:141] libmachine: (addons-825629) Calling .GetConfigRaw
	I0923 11:53:31.527993  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:53:31.528166  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:53:31.528343  505701 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0923 11:53:31.528356  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:53:31.529664  505701 main.go:141] libmachine: Detecting operating system of created instance...
	I0923 11:53:31.529679  505701 main.go:141] libmachine: Waiting for SSH to be available...
	I0923 11:53:31.529684  505701 main.go:141] libmachine: Getting to WaitForSSH function...
	I0923 11:53:31.529689  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:31.532081  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.532542  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:31.532575  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.532712  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:31.532895  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:31.533050  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:31.533174  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:31.533349  505701 main.go:141] libmachine: Using SSH client type: native
	I0923 11:53:31.533524  505701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0923 11:53:31.533536  505701 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0923 11:53:31.638061  505701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:53:31.638090  505701 main.go:141] libmachine: Detecting the provisioner...
	I0923 11:53:31.638101  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:31.641326  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.641795  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:31.641836  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.642031  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:31.642247  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:31.642440  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:31.642570  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:31.642711  505701 main.go:141] libmachine: Using SSH client type: native
	I0923 11:53:31.642923  505701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0923 11:53:31.642936  505701 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0923 11:53:31.751264  505701 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0923 11:53:31.751363  505701 main.go:141] libmachine: found compatible host: buildroot
	I0923 11:53:31.751370  505701 main.go:141] libmachine: Provisioning with buildroot...
	I0923 11:53:31.751378  505701 main.go:141] libmachine: (addons-825629) Calling .GetMachineName
	I0923 11:53:31.751649  505701 buildroot.go:166] provisioning hostname "addons-825629"
	I0923 11:53:31.751681  505701 main.go:141] libmachine: (addons-825629) Calling .GetMachineName
	I0923 11:53:31.752127  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:31.754862  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.755208  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:31.755239  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.755419  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:31.755599  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:31.755734  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:31.755831  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:31.755978  505701 main.go:141] libmachine: Using SSH client type: native
	I0923 11:53:31.756171  505701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0923 11:53:31.756187  505701 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-825629 && echo "addons-825629" | sudo tee /etc/hostname
	I0923 11:53:31.877592  505701 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-825629
	
	I0923 11:53:31.877630  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:31.880769  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.881072  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:31.881104  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.881318  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:31.881520  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:31.882071  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:31.882280  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:31.882819  505701 main.go:141] libmachine: Using SSH client type: native
	I0923 11:53:31.883012  505701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0923 11:53:31.883034  505701 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-825629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-825629/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-825629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:53:31.994630  505701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:53:31.994670  505701 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-497735/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-497735/.minikube}
	I0923 11:53:31.994704  505701 buildroot.go:174] setting up certificates
	I0923 11:53:31.994738  505701 provision.go:84] configureAuth start
	I0923 11:53:31.994785  505701 main.go:141] libmachine: (addons-825629) Calling .GetMachineName
	I0923 11:53:31.995159  505701 main.go:141] libmachine: (addons-825629) Calling .GetIP
	I0923 11:53:31.998219  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.998576  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:31.998605  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:31.998803  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:32.001315  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:32.001623  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:32.001651  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:32.001822  505701 provision.go:143] copyHostCerts
	I0923 11:53:32.001905  505701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem (1078 bytes)
	I0923 11:53:32.002019  505701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem (1123 bytes)
	I0923 11:53:32.002090  505701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem (1679 bytes)
	I0923 11:53:32.002144  505701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem org=jenkins.addons-825629 san=[127.0.0.1 192.168.39.2 addons-825629 localhost minikube]
	I0923 11:53:32.175351  505701 provision.go:177] copyRemoteCerts
	I0923 11:53:32.175414  505701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:53:32.175441  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:32.178117  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:32.178499  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:32.178522  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:32.178775  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:32.178987  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:32.179149  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:32.179277  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:53:32.260492  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 11:53:32.284770  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 11:53:32.308351  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 11:53:32.331624  505701 provision.go:87] duration metric: took 336.86472ms to configureAuth
	I0923 11:53:32.331654  505701 buildroot.go:189] setting minikube options for container-runtime
	I0923 11:53:32.331843  505701 config.go:182] Loaded profile config "addons-825629": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:53:32.331873  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:53:32.332202  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:32.334963  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:32.335353  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:32.335377  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:32.335572  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:32.335778  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:32.335965  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:32.336141  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:32.336313  505701 main.go:141] libmachine: Using SSH client type: native
	I0923 11:53:32.336487  505701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0923 11:53:32.336498  505701 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 11:53:32.444397  505701 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 11:53:32.444426  505701 buildroot.go:70] root file system type: tmpfs
	I0923 11:53:32.444543  505701 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 11:53:32.444562  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:32.447283  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:32.447597  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:32.447625  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:32.447802  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:32.447989  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:32.448140  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:32.448268  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:32.448417  505701 main.go:141] libmachine: Using SSH client type: native
	I0923 11:53:32.448630  505701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0923 11:53:32.448727  505701 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 11:53:32.568142  505701 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 11:53:32.568197  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:32.570820  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:32.571161  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:32.571190  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:32.571333  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:32.571531  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:32.571700  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:32.571824  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:32.571978  505701 main.go:141] libmachine: Using SSH client type: native
	I0923 11:53:32.572148  505701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0923 11:53:32.572164  505701 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 11:53:35.082542  505701 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 11:53:35.082577  505701 main.go:141] libmachine: Checking connection to Docker...
	I0923 11:53:35.082593  505701 main.go:141] libmachine: (addons-825629) Calling .GetURL
	I0923 11:53:35.083870  505701 main.go:141] libmachine: (addons-825629) DBG | Using libvirt version 6000000
	I0923 11:53:35.086228  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.086568  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:35.086599  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.086778  505701 main.go:141] libmachine: Docker is up and running!
	I0923 11:53:35.086797  505701 main.go:141] libmachine: Reticulating splines...
	I0923 11:53:35.086806  505701 client.go:171] duration metric: took 23.887769461s to LocalClient.Create
	I0923 11:53:35.086832  505701 start.go:167] duration metric: took 23.887836786s to libmachine.API.Create "addons-825629"
	I0923 11:53:35.086841  505701 start.go:293] postStartSetup for "addons-825629" (driver="kvm2")
	I0923 11:53:35.086853  505701 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:53:35.086887  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:53:35.087143  505701 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:53:35.087169  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:35.089155  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.089525  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:35.089545  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.089736  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:35.089915  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:35.090067  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:35.090200  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:53:35.172504  505701 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:53:35.176401  505701 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 11:53:35.176429  505701 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-497735/.minikube/addons for local assets ...
	I0923 11:53:35.176504  505701 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-497735/.minikube/files for local assets ...
	I0923 11:53:35.176528  505701 start.go:296] duration metric: took 89.681168ms for postStartSetup
	I0923 11:53:35.176568  505701 main.go:141] libmachine: (addons-825629) Calling .GetConfigRaw
	I0923 11:53:35.177180  505701 main.go:141] libmachine: (addons-825629) Calling .GetIP
	I0923 11:53:35.180288  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.180766  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:35.180790  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.181118  505701 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/config.json ...
	I0923 11:53:35.181327  505701 start.go:128] duration metric: took 24.001601375s to createHost
	I0923 11:53:35.181354  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:35.183955  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.184461  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:35.184483  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.184629  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:35.184817  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:35.184971  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:35.185148  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:35.185303  505701 main.go:141] libmachine: Using SSH client type: native
	I0923 11:53:35.185472  505701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0923 11:53:35.185482  505701 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 11:53:35.295332  505701 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727092415.272695381
	
	I0923 11:53:35.295359  505701 fix.go:216] guest clock: 1727092415.272695381
	I0923 11:53:35.295366  505701 fix.go:229] Guest: 2024-09-23 11:53:35.272695381 +0000 UTC Remote: 2024-09-23 11:53:35.181339313 +0000 UTC m=+24.109732388 (delta=91.356068ms)
	I0923 11:53:35.295412  505701 fix.go:200] guest clock delta is within tolerance: 91.356068ms
	I0923 11:53:35.295417  505701 start.go:83] releasing machines lock for "addons-825629", held for 24.115779104s
	I0923 11:53:35.295438  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:53:35.295751  505701 main.go:141] libmachine: (addons-825629) Calling .GetIP
	I0923 11:53:35.298369  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.298710  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:35.298740  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.298915  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:53:35.299433  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:53:35.299608  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:53:35.299695  505701 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 11:53:35.299746  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:35.299853  505701 ssh_runner.go:195] Run: cat /version.json
	I0923 11:53:35.299880  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:53:35.302564  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.302671  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.302899  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:35.302945  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.303040  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:35.303070  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:35.303078  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:35.303219  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:53:35.303294  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:35.303377  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:53:35.303451  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:35.303508  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:53:35.303580  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:53:35.303648  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:53:35.379990  505701 ssh_runner.go:195] Run: systemctl --version
	I0923 11:53:35.416238  505701 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0923 11:53:35.421896  505701 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:53:35.421962  505701 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:53:35.438347  505701 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 11:53:35.438401  505701 start.go:495] detecting cgroup driver to use...
	I0923 11:53:35.438578  505701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:53:35.456265  505701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:53:35.466655  505701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:53:35.476454  505701 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:53:35.476534  505701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:53:35.486522  505701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:53:35.496119  505701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:53:35.505809  505701 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:53:35.515790  505701 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:53:35.526789  505701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:53:35.536733  505701 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:53:35.546938  505701 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 11:53:35.556996  505701 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:53:35.565893  505701 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 11:53:35.565990  505701 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 11:53:35.576291  505701 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:53:35.585697  505701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:53:35.702811  505701 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 11:53:35.721183  505701 start.go:495] detecting cgroup driver to use...
	I0923 11:53:35.721326  505701 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 11:53:35.735376  505701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:53:35.752409  505701 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 11:53:35.771185  505701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 11:53:35.784439  505701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:53:35.798240  505701 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 11:53:35.829121  505701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:53:35.842709  505701 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:53:35.860288  505701 ssh_runner.go:195] Run: which cri-dockerd
	I0923 11:53:35.863998  505701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:53:35.873242  505701 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 11:53:35.888797  505701 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 11:53:35.993721  505701 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 11:53:36.113437  505701 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:53:36.113584  505701 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:53:36.129226  505701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:53:36.236748  505701 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:53:38.541805  505701 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.305009978s)
	I0923 11:53:38.541889  505701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 11:53:38.554196  505701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:53:38.567044  505701 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 11:53:38.677628  505701 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 11:53:38.796419  505701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:53:38.913790  505701 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 11:53:38.930485  505701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:53:38.943625  505701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:53:39.059756  505701 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 11:53:39.129929  505701 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 11:53:39.130057  505701 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 11:53:39.135542  505701 start.go:563] Will wait 60s for crictl version
	I0923 11:53:39.135608  505701 ssh_runner.go:195] Run: which crictl
	I0923 11:53:39.139511  505701 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 11:53:39.175720  505701 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 11:53:39.175806  505701 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:53:39.200656  505701 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:53:39.221880  505701 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 11:53:39.221933  505701 main.go:141] libmachine: (addons-825629) Calling .GetIP
	I0923 11:53:39.224602  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:39.225062  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:53:39.225098  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:53:39.225357  505701 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 11:53:39.229498  505701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:53:39.241023  505701 kubeadm.go:883] updating cluster {Name:addons-825629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-825629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:53:39.241154  505701 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:53:39.241212  505701 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:53:39.261830  505701 docker.go:685] Got preloaded images: 
	I0923 11:53:39.261857  505701 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I0923 11:53:39.261904  505701 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 11:53:39.270920  505701 ssh_runner.go:195] Run: which lz4
	I0923 11:53:39.274365  505701 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 11:53:39.277975  505701 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 11:53:39.278010  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I0923 11:53:40.301785  505701 docker.go:649] duration metric: took 1.027443431s to copy over tarball
	I0923 11:53:40.301876  505701 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 11:53:42.105571  505701 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.803655963s)
	I0923 11:53:42.105607  505701 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0923 11:53:42.140135  505701 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 11:53:42.149957  505701 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0923 11:53:42.166401  505701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:53:42.274587  505701 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:53:46.639480  505701 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.364849534s)
	I0923 11:53:46.639597  505701 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:53:46.657355  505701 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 11:53:46.657388  505701 cache_images.go:84] Images are preloaded, skipping loading
	I0923 11:53:46.657403  505701 kubeadm.go:934] updating node { 192.168.39.2 8443 v1.31.1 docker true true} ...
	I0923 11:53:46.657542  505701 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-825629 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-825629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:53:46.657605  505701 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 11:53:46.703775  505701 cni.go:84] Creating CNI manager for ""
	I0923 11:53:46.703806  505701 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:53:46.703820  505701 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:53:46.703847  505701 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-825629 NodeName:addons-825629 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:53:46.704021  505701 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-825629"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 11:53:46.704124  505701 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:53:46.713914  505701 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:53:46.714012  505701 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:53:46.723284  505701 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 11:53:46.739252  505701 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:53:46.754385  505701 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0923 11:53:46.770061  505701 ssh_runner.go:195] Run: grep 192.168.39.2	control-plane.minikube.internal$ /etc/hosts
	I0923 11:53:46.773702  505701 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:53:46.785230  505701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:53:46.896606  505701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:53:46.917395  505701 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629 for IP: 192.168.39.2
	I0923 11:53:46.917422  505701 certs.go:194] generating shared ca certs ...
	I0923 11:53:46.917444  505701 certs.go:226] acquiring lock for ca certs: {Name:mk368fdda7ea812502dc0809d673a3fd993c0e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:46.917613  505701 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.key
	I0923 11:53:47.099178  505701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt ...
	I0923 11:53:47.099213  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt: {Name:mka44d28447b4007412df04a88120bd797ed9ccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:47.099391  505701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-497735/.minikube/ca.key ...
	I0923 11:53:47.099402  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/ca.key: {Name:mk86c7ae9c374bac3fcbb5afa38f59ca97426fbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:47.099472  505701 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.key
	I0923 11:53:47.532080  505701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.crt ...
	I0923 11:53:47.532121  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.crt: {Name:mk03fdc9e5fba070da76c0a660e0a4ed9cfcb6fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:47.532321  505701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.key ...
	I0923 11:53:47.532334  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.key: {Name:mkb2ed42c7b890da489b13feeed934603c2ee0d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:47.532412  505701 certs.go:256] generating profile certs ...
	I0923 11:53:47.532475  505701 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.key
	I0923 11:53:47.532492  505701 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt with IP's: []
	I0923 11:53:47.601272  505701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt ...
	I0923 11:53:47.601309  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: {Name:mk932eadef9e098d09779d1f5b23d7591c75f069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:47.601485  505701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.key ...
	I0923 11:53:47.601496  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.key: {Name:mk24bc09f01df408dec9a6e3fdd23300da5c484c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:47.601559  505701 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.key.9d09bf84
	I0923 11:53:47.601578  505701 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.crt.9d09bf84 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2]
	I0923 11:53:47.726968  505701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.crt.9d09bf84 ...
	I0923 11:53:47.727003  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.crt.9d09bf84: {Name:mk7eaf5da588ab01f787074db7a468ba29d27e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:47.727202  505701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.key.9d09bf84 ...
	I0923 11:53:47.727215  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.key.9d09bf84: {Name:mk0053c2b91e5ebafc5d79647bc0e11a30d24701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:47.727289  505701 certs.go:381] copying /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.crt.9d09bf84 -> /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.crt
	I0923 11:53:47.727396  505701 certs.go:385] copying /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.key.9d09bf84 -> /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.key
	I0923 11:53:47.727453  505701 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/proxy-client.key
	I0923 11:53:47.727472  505701 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/proxy-client.crt with IP's: []
	I0923 11:53:47.960254  505701 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/proxy-client.crt ...
	I0923 11:53:47.960301  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/proxy-client.crt: {Name:mk1021d9dac4ee3580ccf44ab813b629fd2986e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:47.960497  505701 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/proxy-client.key ...
	I0923 11:53:47.960511  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/proxy-client.key: {Name:mkecbdec204d32a6f6458fef5ebcfce2ecf0baa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:47.960688  505701 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 11:53:47.960726  505701 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:53:47.960753  505701 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:53:47.960777  505701 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem (1679 bytes)
	I0923 11:53:47.961424  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:53:47.985837  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:53:48.008914  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:53:48.031135  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:53:48.053754  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 11:53:48.076832  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:53:48.099139  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:53:48.122012  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 11:53:48.144459  505701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:53:48.167055  505701 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:53:48.183588  505701 ssh_runner.go:195] Run: openssl version
	I0923 11:53:48.189391  505701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:53:48.200157  505701 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:53:48.204591  505701 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:53 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:53:48.204668  505701 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:53:48.210157  505701 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:53:48.220579  505701 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:53:48.224481  505701 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:53:48.224534  505701 kubeadm.go:392] StartCluster: {Name:addons-825629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-825629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:53:48.224645  505701 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 11:53:48.239262  505701 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:53:48.248813  505701 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:53:48.257958  505701 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:53:48.266808  505701 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:53:48.266834  505701 kubeadm.go:157] found existing configuration files:
	
	I0923 11:53:48.266882  505701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:53:48.275783  505701 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:53:48.275869  505701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:53:48.284778  505701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:53:48.294017  505701 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:53:48.294093  505701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:53:48.303866  505701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:53:48.313054  505701 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:53:48.313135  505701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:53:48.322121  505701 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:53:48.330894  505701 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:53:48.330955  505701 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:53:48.340435  505701 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 11:53:48.386461  505701 kubeadm.go:310] W0923 11:53:48.367041    1510 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:53:48.387382  505701 kubeadm.go:310] W0923 11:53:48.368154    1510 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:53:48.516848  505701 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 11:53:58.324102  505701 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 11:53:58.324198  505701 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:53:58.324297  505701 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:53:58.324406  505701 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:53:58.324530  505701 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 11:53:58.324613  505701 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:53:58.326131  505701 out.go:235]   - Generating certificates and keys ...
	I0923 11:53:58.326216  505701 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:53:58.326302  505701 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:53:58.326372  505701 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:53:58.326446  505701 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:53:58.326544  505701 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:53:58.326630  505701 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:53:58.326679  505701 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:53:58.326809  505701 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-825629 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0923 11:53:58.326863  505701 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:53:58.326995  505701 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-825629 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0923 11:53:58.327081  505701 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:53:58.327174  505701 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:53:58.327219  505701 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:53:58.327278  505701 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:53:58.327326  505701 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:53:58.327404  505701 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 11:53:58.327474  505701 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:53:58.327570  505701 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:53:58.327620  505701 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:53:58.327694  505701 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:53:58.327754  505701 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:53:58.329396  505701 out.go:235]   - Booting up control plane ...
	I0923 11:53:58.329510  505701 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:53:58.329587  505701 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:53:58.329644  505701 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:53:58.329733  505701 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:53:58.329812  505701 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:53:58.329846  505701 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:53:58.329983  505701 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 11:53:58.330136  505701 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 11:53:58.330228  505701 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005601274s
	I0923 11:53:58.330311  505701 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 11:53:58.330366  505701 kubeadm.go:310] [api-check] The API server is healthy after 5.001739899s
	I0923 11:53:58.330453  505701 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 11:53:58.330594  505701 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 11:53:58.330672  505701 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 11:53:58.330927  505701 kubeadm.go:310] [mark-control-plane] Marking the node addons-825629 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 11:53:58.331002  505701 kubeadm.go:310] [bootstrap-token] Using token: 7xk2so.is3be03inpk3eb3p
	I0923 11:53:58.332468  505701 out.go:235]   - Configuring RBAC rules ...
	I0923 11:53:58.332616  505701 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 11:53:58.332709  505701 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 11:53:58.332867  505701 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 11:53:58.332985  505701 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 11:53:58.333096  505701 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 11:53:58.333206  505701 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 11:53:58.333360  505701 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 11:53:58.333401  505701 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 11:53:58.333441  505701 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 11:53:58.333450  505701 kubeadm.go:310] 
	I0923 11:53:58.333499  505701 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 11:53:58.333505  505701 kubeadm.go:310] 
	I0923 11:53:58.333593  505701 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 11:53:58.333605  505701 kubeadm.go:310] 
	I0923 11:53:58.333627  505701 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 11:53:58.333683  505701 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 11:53:58.333727  505701 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 11:53:58.333732  505701 kubeadm.go:310] 
	I0923 11:53:58.333797  505701 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 11:53:58.333813  505701 kubeadm.go:310] 
	I0923 11:53:58.333883  505701 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 11:53:58.333891  505701 kubeadm.go:310] 
	I0923 11:53:58.333961  505701 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 11:53:58.334057  505701 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 11:53:58.334121  505701 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 11:53:58.334124  505701 kubeadm.go:310] 
	I0923 11:53:58.334198  505701 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 11:53:58.334262  505701 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 11:53:58.334268  505701 kubeadm.go:310] 
	I0923 11:53:58.334336  505701 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7xk2so.is3be03inpk3eb3p \
	I0923 11:53:58.334428  505701 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a43abe2cb7513769edfdc2fac847cabc585ae9a822aad0499c587380afedaf14 \
	I0923 11:53:58.334449  505701 kubeadm.go:310] 	--control-plane 
	I0923 11:53:58.334453  505701 kubeadm.go:310] 
	I0923 11:53:58.334573  505701 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 11:53:58.334591  505701 kubeadm.go:310] 
	I0923 11:53:58.334734  505701 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7xk2so.is3be03inpk3eb3p \
	I0923 11:53:58.334918  505701 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a43abe2cb7513769edfdc2fac847cabc585ae9a822aad0499c587380afedaf14 
	I0923 11:53:58.334932  505701 cni.go:84] Creating CNI manager for ""
	I0923 11:53:58.334949  505701 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:53:58.336404  505701 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 11:53:58.337671  505701 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 11:53:58.348768  505701 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 11:53:58.366348  505701 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:53:58.366483  505701 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-825629 minikube.k8s.io/updated_at=2024_09_23T11_53_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=addons-825629 minikube.k8s.io/primary=true
	I0923 11:53:58.366508  505701 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:53:58.383606  505701 ops.go:34] apiserver oom_adj: -16
	I0923 11:53:58.483321  505701 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:53:58.983998  505701 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:53:59.484394  505701 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:53:59.983500  505701 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:54:00.483828  505701 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:54:00.984242  505701 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:54:01.483436  505701 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:54:01.984399  505701 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:54:02.064277  505701 kubeadm.go:1113] duration metric: took 3.697888363s to wait for elevateKubeSystemPrivileges
	I0923 11:54:02.064324  505701 kubeadm.go:394] duration metric: took 13.839794308s to StartCluster
	I0923 11:54:02.064350  505701 settings.go:142] acquiring lock: {Name:mke8a2c3e1b68f8bfc3d2a76cd3ad640f66f3e7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:54:02.064510  505701 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 11:54:02.065017  505701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/kubeconfig: {Name:mk0cef7f71c4fa7d96e459b50c6c36de6d1dd40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:54:02.065241  505701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 11:54:02.065272  505701 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 11:54:02.065245  505701 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:54:02.065366  505701 addons.go:69] Setting yakd=true in profile "addons-825629"
	I0923 11:54:02.065387  505701 addons.go:234] Setting addon yakd=true in "addons-825629"
	I0923 11:54:02.065435  505701 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-825629"
	I0923 11:54:02.065464  505701 addons.go:69] Setting ingress=true in profile "addons-825629"
	I0923 11:54:02.065485  505701 addons.go:69] Setting ingress-dns=true in profile "addons-825629"
	I0923 11:54:02.065515  505701 addons.go:234] Setting addon ingress=true in "addons-825629"
	I0923 11:54:02.065528  505701 addons.go:234] Setting addon ingress-dns=true in "addons-825629"
	I0923 11:54:02.065556  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.065584  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.065582  505701 addons.go:69] Setting storage-provisioner=true in profile "addons-825629"
	I0923 11:54:02.065433  505701 addons.go:69] Setting default-storageclass=true in profile "addons-825629"
	I0923 11:54:02.065625  505701 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-825629"
	I0923 11:54:02.065505  505701 config.go:182] Loaded profile config "addons-825629": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:54:02.065466  505701 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-825629"
	I0923 11:54:02.065729  505701 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-825629"
	I0923 11:54:02.065760  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.065427  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.065437  505701 addons.go:69] Setting gcp-auth=true in profile "addons-825629"
	I0923 11:54:02.065850  505701 mustload.go:65] Loading cluster: addons-825629
	I0923 11:54:02.066006  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.066022  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.066033  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.066064  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.066095  505701 config.go:182] Loaded profile config "addons-825629": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:54:02.065447  505701 addons.go:69] Setting cloud-spanner=true in profile "addons-825629"
	I0923 11:54:02.066142  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.066154  505701 addons.go:234] Setting addon cloud-spanner=true in "addons-825629"
	I0923 11:54:02.066160  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.065455  505701 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-825629"
	I0923 11:54:02.066193  505701 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-825629"
	I0923 11:54:02.065472  505701 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-825629"
	I0923 11:54:02.065476  505701 addons.go:69] Setting metrics-server=true in profile "addons-825629"
	I0923 11:54:02.066214  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.066249  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.066225  505701 addons.go:234] Setting addon metrics-server=true in "addons-825629"
	I0923 11:54:02.065485  505701 addons.go:69] Setting volumesnapshots=true in profile "addons-825629"
	I0923 11:54:02.066320  505701 addons.go:234] Setting addon volumesnapshots=true in "addons-825629"
	I0923 11:54:02.065424  505701 addons.go:69] Setting inspektor-gadget=true in profile "addons-825629"
	I0923 11:54:02.066338  505701 addons.go:234] Setting addon inspektor-gadget=true in "addons-825629"
	I0923 11:54:02.065490  505701 addons.go:69] Setting registry=true in profile "addons-825629"
	I0923 11:54:02.066350  505701 addons.go:234] Setting addon registry=true in "addons-825629"
	I0923 11:54:02.065607  505701 addons.go:234] Setting addon storage-provisioner=true in "addons-825629"
	I0923 11:54:02.066210  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.065480  505701 addons.go:69] Setting volcano=true in profile "addons-825629"
	I0923 11:54:02.066417  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.066443  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.066466  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.066532  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.066626  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.066422  505701 addons.go:234] Setting addon volcano=true in "addons-825629"
	I0923 11:54:02.066789  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.066809  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.066837  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.066871  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.066888  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.066387  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.066940  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.066963  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.067029  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.067042  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.067067  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.067221  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.067279  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.067483  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.067620  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.067643  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.067657  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.067681  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.068241  505701 out.go:177] * Verifying Kubernetes components...
	I0923 11:54:02.069936  505701 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:54:02.087394  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32859
	I0923 11:54:02.087741  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I0923 11:54:02.087768  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0923 11:54:02.088122  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.088380  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.088500  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.088889  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.088912  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.089009  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.089025  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.089040  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.089083  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.089173  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43127
	I0923 11:54:02.089270  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.089482  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.089503  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.089595  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.089652  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.089907  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.089950  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.090080  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.090104  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.090316  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.090469  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.092074  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.107150  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.107199  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.107397  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.107444  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.107899  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.107958  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.108523  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.108562  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.109378  505701 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-825629"
	I0923 11:54:02.109431  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.109807  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.109856  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.116171  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.116224  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.128284  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37061
	I0923 11:54:02.128865  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.129479  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.129508  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.129904  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.130447  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.130493  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.136442  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0923 11:54:02.136921  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43963
	I0923 11:54:02.137135  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.137650  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.137671  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.137709  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.138183  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.138309  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.138330  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.138465  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.138689  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.139305  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.139349  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.139719  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37747
	I0923 11:54:02.140169  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.140697  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.140720  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.141093  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.141705  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.141718  505701 addons.go:234] Setting addon default-storageclass=true in "addons-825629"
	I0923 11:54:02.141750  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.141761  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:02.142134  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.142174  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.145919  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0923 11:54:02.146142  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45871
	I0923 11:54:02.146843  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.147464  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.147492  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.147873  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.148445  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.148494  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.150933  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46157
	I0923 11:54:02.151428  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.151568  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.151589  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34125
	I0923 11:54:02.151903  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.151925  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.152004  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.152161  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.152182  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.152369  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.152579  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.152600  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.152687  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.152738  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.153651  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.154092  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.154137  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.154683  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.154695  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.154727  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.154776  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44785
	I0923 11:54:02.155788  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.156464  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.156490  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.156675  505701 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 11:54:02.156860  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.157378  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.157419  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.159064  505701 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 11:54:02.160371  505701 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 11:54:02.163140  505701 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:54:02.163175  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 11:54:02.163204  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.166472  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
	I0923 11:54:02.167166  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.167313  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.168025  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.168049  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.168289  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.168498  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.168641  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.168778  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.171400  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35829
	I0923 11:54:02.172116  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.172138  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.172582  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.172597  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.172855  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.173558  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.173580  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.174054  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.174289  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.174858  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.176905  505701 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 11:54:02.178140  505701 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 11:54:02.178165  505701 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 11:54:02.178192  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.181828  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.182469  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.183096  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.183122  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.183473  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.183639  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.183723  505701 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 11:54:02.183807  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.183938  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.185048  505701 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:54:02.185075  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 11:54:02.185094  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.188380  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.188805  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.188830  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.189109  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45387
	I0923 11:54:02.189275  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.189600  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.189757  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.189888  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.192175  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35831
	I0923 11:54:02.192205  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0923 11:54:02.192220  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42899
	I0923 11:54:02.192721  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.192828  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.192885  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.192937  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.193397  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.193420  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.193813  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.193831  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.194239  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.194455  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.195650  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33937
	I0923 11:54:02.195729  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.195750  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.196420  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.196524  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.196577  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.197161  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.197203  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.197893  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.197913  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.198368  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.198557  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.199669  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33291
	I0923 11:54:02.200036  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34577
	I0923 11:54:02.200247  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.200438  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.200460  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.200470  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.201085  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.201103  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.201165  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.201722  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.201758  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.201791  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.202142  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.202162  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.202632  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.202665  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.203358  505701 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 11:54:02.203735  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.203778  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.204006  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.204053  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39631
	I0923 11:54:02.204633  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.204674  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.204924  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.205121  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.205142  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.206052  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.206215  505701 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 11:54:02.206372  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
	I0923 11:54:02.206817  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.206862  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.207163  505701 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 11:54:02.207202  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.208046  505701 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 11:54:02.208070  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 11:54:02.208087  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.208089  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.208102  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.208538  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.208750  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.209850  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0923 11:54:02.210011  505701 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 11:54:02.210650  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.211271  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.211292  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.211713  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.211920  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.211976  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.212742  505701 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 11:54:02.213029  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.213888  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.213969  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.213988  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.214267  505701 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 11:54:02.214272  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.214473  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.214597  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.214775  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.214829  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.215351  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41963
	I0923 11:54:02.215606  505701 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 11:54:02.215627  505701 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 11:54:02.215806  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.216458  505701 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:54:02.216462  505701 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 11:54:02.216476  505701 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 11:54:02.216551  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.216701  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.216724  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.217703  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.218004  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.218076  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45085
	I0923 11:54:02.218246  505701 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 11:54:02.218266  505701 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 11:54:02.218292  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.218514  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.219385  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.219404  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.219624  505701 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 11:54:02.219684  505701 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:54:02.220003  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.220917  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:02.220962  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:02.222633  505701 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:54:02.222682  505701 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 11:54:02.225788  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.225860  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.225906  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.225926  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.225941  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.225964  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.226988  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.227140  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.228118  505701 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:54:02.228139  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 11:54:02.228159  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.228229  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.228277  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.228290  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.229267  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.229776  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.230122  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.230410  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.230667  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33785
	I0923 11:54:02.231154  505701 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 11:54:02.231417  505701 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:54:02.232416  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.233066  505701 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:54:02.233081  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34655
	I0923 11:54:02.233086  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:54:02.233110  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.233295  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.233317  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.233378  505701 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 11:54:02.233389  505701 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 11:54:02.233405  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.233551  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.234140  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.234160  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.234596  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.234826  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.234882  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.235472  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.235985  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.236006  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.236388  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.236642  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.236822  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.236986  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.237532  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.238211  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.238724  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.238744  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.238907  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.238952  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.239141  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.239320  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.239463  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.239687  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
	I0923 11:54:02.239842  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.240038  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45633
	I0923 11:54:02.240406  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.240427  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.240472  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.240648  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.240741  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.240899  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.240962  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.241116  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.241443  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.241458  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.241802  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.241820  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.242368  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.242431  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.242470  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42999
	I0923 11:54:02.242632  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.242669  505701 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 11:54:02.242880  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.243140  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:02.243509  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.244027  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:02.244049  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:02.244340  505701 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 11:54:02.244418  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:02.244451  505701 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:54:02.244467  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 11:54:02.244490  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.244584  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:02.244979  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.246184  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.246434  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:02.246436  505701 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:54:02.246491  505701 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:54:02.246506  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.246525  505701 out.go:177]   - Using image docker.io/busybox:stable
	I0923 11:54:02.246574  505701 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 11:54:02.248082  505701 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 11:54:02.248322  505701 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:54:02.248342  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 11:54:02.248361  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.248434  505701 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 11:54:02.248447  505701 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 11:54:02.248462  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.248579  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.249017  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.249038  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.249349  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.249500  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.249659  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.249785  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.250697  505701 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 11:54:02.251867  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.252098  505701 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 11:54:02.252109  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 11:54:02.252121  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:02.252252  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.252685  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.252708  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.252811  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.252837  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.252894  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.253070  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.253114  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.253245  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.253271  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.253437  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.253467  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.253826  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.253956  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.253977  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.254012  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.254165  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.254326  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.254454  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.254559  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.255740  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	W0923 11:54:02.255802  505701 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48786->192.168.39.2:22: read: connection reset by peer
	I0923 11:54:02.255832  505701 retry.go:31] will retry after 321.020759ms: ssh: handshake failed: read tcp 192.168.39.1:48786->192.168.39.2:22: read: connection reset by peer
	I0923 11:54:02.256120  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:02.256153  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:02.256414  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:02.256575  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:02.256726  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:02.256821  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:02.431942  505701 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:54:02.432161  505701 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 11:54:02.511259  505701 node_ready.go:35] waiting up to 6m0s for node "addons-825629" to be "Ready" ...
	I0923 11:54:02.514846  505701 node_ready.go:49] node "addons-825629" has status "Ready":"True"
	I0923 11:54:02.514876  505701 node_ready.go:38] duration metric: took 3.576413ms for node "addons-825629" to be "Ready" ...
	I0923 11:54:02.514887  505701 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:54:02.523973  505701 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-825629" in "kube-system" namespace to be "Ready" ...
	I0923 11:54:02.606545  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:54:02.634485  505701 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 11:54:02.634523  505701 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 11:54:02.658348  505701 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 11:54:02.658382  505701 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 11:54:02.661790  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 11:54:02.672943  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 11:54:02.719487  505701 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 11:54:02.719519  505701 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 11:54:02.723179  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:54:02.733171  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:54:02.769705  505701 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:54:02.769741  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 11:54:02.801313  505701 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 11:54:02.801346  505701 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 11:54:02.866442  505701 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 11:54:02.866478  505701 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 11:54:02.920029  505701 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 11:54:02.920077  505701 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 11:54:02.925680  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:54:02.938088  505701 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 11:54:02.938130  505701 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 11:54:02.973531  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 11:54:02.990634  505701 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 11:54:02.990672  505701 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 11:54:03.001602  505701 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 11:54:03.001635  505701 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 11:54:03.005317  505701 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:54:03.005339  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 11:54:03.039721  505701 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:54:03.039760  505701 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 11:54:03.233751  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:54:03.246518  505701 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:54:03.246562  505701 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 11:54:03.290437  505701 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 11:54:03.290468  505701 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 11:54:03.294633  505701 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 11:54:03.294659  505701 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 11:54:03.364279  505701 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 11:54:03.364314  505701 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 11:54:03.398312  505701 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 11:54:03.398358  505701 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 11:54:03.437912  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:54:03.698974  505701 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 11:54:03.699009  505701 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 11:54:03.713384  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:54:03.798983  505701 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 11:54:03.799015  505701 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 11:54:03.827012  505701 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 11:54:03.827043  505701 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 11:54:03.859101  505701 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 11:54:03.859127  505701 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 11:54:04.295279  505701 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:54:04.295305  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 11:54:04.316367  505701 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 11:54:04.316392  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 11:54:04.529797  505701 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:54:04.529823  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 11:54:04.533797  505701 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 11:54:04.533824  505701 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 11:54:04.550227  505701 pod_ready.go:93] pod "etcd-addons-825629" in "kube-system" namespace has status "Ready":"True"
	I0923 11:54:04.550269  505701 pod_ready.go:82] duration metric: took 2.026250317s for pod "etcd-addons-825629" in "kube-system" namespace to be "Ready" ...
	I0923 11:54:04.550286  505701 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-825629" in "kube-system" namespace to be "Ready" ...
	I0923 11:54:04.722558  505701 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 11:54:04.722587  505701 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 11:54:04.728522  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:54:04.845386  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:54:04.906318  505701 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 11:54:04.906358  505701 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 11:54:04.962074  505701 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 11:54:04.962109  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 11:54:05.098882  505701 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:54:05.098914  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 11:54:05.383089  505701 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 11:54:05.383118  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 11:54:05.601916  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:54:05.697640  505701 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.265446423s)
	I0923 11:54:05.697674  505701 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0923 11:54:05.697730  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.091139538s)
	I0923 11:54:05.697795  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:05.697816  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:05.698158  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:05.698179  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:05.698189  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:05.698197  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:05.698605  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:05.698624  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:05.722011  505701 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:54:05.722039  505701 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 11:54:06.052132  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:54:06.211505  505701 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-825629" context rescaled to 1 replicas
	I0923 11:54:06.589636  505701 pod_ready.go:103] pod "kube-apiserver-addons-825629" in "kube-system" namespace has status "Ready":"False"
	I0923 11:54:08.617806  505701 pod_ready.go:103] pod "kube-apiserver-addons-825629" in "kube-system" namespace has status "Ready":"False"
	I0923 11:54:09.213346  505701 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 11:54:09.213391  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:09.216648  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:09.217092  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:09.217118  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:09.217319  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:09.217566  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:09.217813  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:09.218020  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:09.685252  505701 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 11:54:09.975621  505701 addons.go:234] Setting addon gcp-auth=true in "addons-825629"
	I0923 11:54:09.975685  505701 host.go:66] Checking if "addons-825629" exists ...
	I0923 11:54:09.976081  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:09.976150  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:09.992705  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34019
	I0923 11:54:09.993234  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:09.993776  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:09.993806  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:09.994191  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:09.994924  505701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 11:54:09.994981  505701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 11:54:10.011738  505701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0923 11:54:10.012255  505701 main.go:141] libmachine: () Calling .GetVersion
	I0923 11:54:10.012915  505701 main.go:141] libmachine: Using API Version  1
	I0923 11:54:10.012946  505701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 11:54:10.013291  505701 main.go:141] libmachine: () Calling .GetMachineName
	I0923 11:54:10.013530  505701 main.go:141] libmachine: (addons-825629) Calling .GetState
	I0923 11:54:10.015166  505701 main.go:141] libmachine: (addons-825629) Calling .DriverName
	I0923 11:54:10.015446  505701 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 11:54:10.015483  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHHostname
	I0923 11:54:10.018257  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:10.018779  505701 main.go:141] libmachine: (addons-825629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:e2:9c", ip: ""} in network mk-addons-825629: {Iface:virbr1 ExpiryTime:2024-09-23 12:53:25 +0000 UTC Type:0 Mac:52:54:00:cc:e2:9c Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:addons-825629 Clientid:01:52:54:00:cc:e2:9c}
	I0923 11:54:10.018811  505701 main.go:141] libmachine: (addons-825629) DBG | domain addons-825629 has defined IP address 192.168.39.2 and MAC address 52:54:00:cc:e2:9c in network mk-addons-825629
	I0923 11:54:10.019085  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHPort
	I0923 11:54:10.019281  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHKeyPath
	I0923 11:54:10.019477  505701 main.go:141] libmachine: (addons-825629) Calling .GetSSHUsername
	I0923 11:54:10.019706  505701 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/addons-825629/id_rsa Username:docker}
	I0923 11:54:11.094728  505701 pod_ready.go:103] pod "kube-apiserver-addons-825629" in "kube-system" namespace has status "Ready":"False"
	I0923 11:54:12.158003  505701 pod_ready.go:93] pod "kube-apiserver-addons-825629" in "kube-system" namespace has status "Ready":"True"
	I0923 11:54:12.158033  505701 pod_ready.go:82] duration metric: took 7.607736552s for pod "kube-apiserver-addons-825629" in "kube-system" namespace to be "Ready" ...
	I0923 11:54:12.158048  505701 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-825629" in "kube-system" namespace to be "Ready" ...
	I0923 11:54:12.267730  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.605892797s)
	I0923 11:54:12.267786  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.594807227s)
	I0923 11:54:12.267837  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:12.267795  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:12.267910  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:12.267853  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:12.268204  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:12.268219  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:12.268229  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:12.268236  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:12.270254  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:12.270267  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:12.270277  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:12.270287  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:12.270301  505701 addons.go:475] Verifying addon ingress=true in "addons-825629"
	I0923 11:54:12.270298  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:12.270326  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:12.270341  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:12.270351  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:12.270586  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:12.270600  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:12.271633  505701 out.go:177] * Verifying ingress addon...
	I0923 11:54:12.273649  505701 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 11:54:12.280368  505701 pod_ready.go:93] pod "kube-controller-manager-addons-825629" in "kube-system" namespace has status "Ready":"True"
	I0923 11:54:12.280394  505701 pod_ready.go:82] duration metric: took 122.337235ms for pod "kube-controller-manager-addons-825629" in "kube-system" namespace to be "Ready" ...
	I0923 11:54:12.280404  505701 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-825629" in "kube-system" namespace to be "Ready" ...
	I0923 11:54:12.304036  505701 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 11:54:12.304067  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:12.320809  505701 pod_ready.go:93] pod "kube-scheduler-addons-825629" in "kube-system" namespace has status "Ready":"True"
	I0923 11:54:12.320840  505701 pod_ready.go:82] duration metric: took 40.427961ms for pod "kube-scheduler-addons-825629" in "kube-system" namespace to be "Ready" ...
	I0923 11:54:12.320852  505701 pod_ready.go:39] duration metric: took 9.805950527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:54:12.320877  505701 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:54:12.320946  505701 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:54:12.805946  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:13.320746  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:13.864514  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:14.301605  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:14.654263  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.921048901s)
	I0923 11:54:14.654330  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.654343  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.654378  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.680811362s)
	I0923 11:54:14.654409  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.654423  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.654331  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.728621482s)
	I0923 11:54:14.654472  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.42067879s)
	I0923 11:54:14.654520  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.216568032s)
	I0923 11:54:14.654577  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.654591  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.654578  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.654607  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.654525  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.654679  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.654646  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.941218619s)
	I0923 11:54:14.654818  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.654829  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.654852  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.926287906s)
	I0923 11:54:14.654863  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.654884  505701 main.go:141] libmachine: Successfully made call to close driver server
	W0923 11:54:14.654888  505701 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:54:14.654901  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.654919  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.654919  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.654943  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.654945  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.654945  505701 retry.go:31] will retry after 197.68668ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:54:14.654951  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.654983  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.654988  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.654996  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.655004  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.655010  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.655022  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.809602689s)
	I0923 11:54:14.654955  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.655048  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.655057  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.654896  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.655180  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.053228739s)
	I0923 11:54:14.655195  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.655202  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.655245  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.655264  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.655282  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.655289  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.655289  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.655308  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.655322  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.655329  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.655337  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.655389  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.655460  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.655481  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.655500  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.655727  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.655753  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.655759  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.655768  505701 addons.go:475] Verifying addon registry=true in "addons-825629"
	I0923 11:54:14.656043  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.656086  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.656105  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.656129  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.656136  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.656966  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.657009  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.657015  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.657022  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.657029  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.657092  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.657114  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.657120  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.655868  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.657454  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.658270  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.658285  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.658295  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.658303  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.658323  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.658332  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.658341  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.658345  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.658348  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.658393  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.658412  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.658418  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.658606  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.935393835s)
	I0923 11:54:14.658632  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.658642  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.658696  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.658714  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.658723  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.658778  505701 addons.go:475] Verifying addon metrics-server=true in "addons-825629"
	I0923 11:54:14.658958  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.658983  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.658991  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.659100  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.659069  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.659115  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.659125  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.659131  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.659421  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.659435  505701 out.go:177] * Verifying registry addon...
	I0923 11:54:14.659458  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.659471  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.661192  505701 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-825629 service yakd-dashboard -n yakd-dashboard
	
	I0923 11:54:14.662259  505701 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 11:54:14.698708  505701 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 11:54:14.698741  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:14.704498  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.704523  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.704797  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.704812  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.714445  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:14.714467  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:14.714746  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:14.714769  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:14.714783  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:14.853296  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:54:14.856293  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:15.212645  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:15.310687  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:15.683325  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:15.795506  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:15.823500  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.771322743s)
	I0923 11:54:15.823555  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:15.823573  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:15.823613  505701 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.808139401s)
	I0923 11:54:15.823662  505701 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.502700468s)
	I0923 11:54:15.823682  505701 api_server.go:72] duration metric: took 13.758308426s to wait for apiserver process to appear ...
	I0923 11:54:15.823693  505701 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:54:15.823714  505701 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0923 11:54:15.823941  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:15.823947  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:15.823963  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:15.823973  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:15.823981  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:15.824261  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:15.824299  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:15.824317  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:15.824332  505701 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-825629"
	I0923 11:54:15.825482  505701 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 11:54:15.825495  505701 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:54:15.827399  505701 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 11:54:15.828031  505701 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 11:54:15.828995  505701 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 11:54:15.829019  505701 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 11:54:15.863847  505701 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0923 11:54:15.867106  505701 api_server.go:141] control plane version: v1.31.1
	I0923 11:54:15.867145  505701 api_server.go:131] duration metric: took 43.442342ms to wait for apiserver health ...
	I0923 11:54:15.867157  505701 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:54:15.889941  505701 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 11:54:15.889967  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:15.901818  505701 system_pods.go:59] 18 kube-system pods found
	I0923 11:54:15.901860  505701 system_pods.go:61] "coredns-7c65d6cfc9-6xckn" [ec277d8a-8d88-489d-a8d3-d8217cf3d358] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 11:54:15.901870  505701 system_pods.go:61] "coredns-7c65d6cfc9-mg246" [1c261074-5608-462d-bf9f-1fdde5495da5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0923 11:54:15.901882  505701 system_pods.go:61] "csi-hostpath-attacher-0" [3596d2b0-7c88-46de-a53d-7604ac668254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:54:15.901892  505701 system_pods.go:61] "csi-hostpath-resizer-0" [dac60e72-6cea-4aa6-bfb5-56323c0ee3ed] Pending
	I0923 11:54:15.901900  505701 system_pods.go:61] "csi-hostpathplugin-q2tzs" [4fdb02a0-da37-422f-80a5-d5b51bf189c7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:54:15.901905  505701 system_pods.go:61] "etcd-addons-825629" [01ad5e2a-befd-4d33-b794-1dcad1de1019] Running
	I0923 11:54:15.901916  505701 system_pods.go:61] "kube-apiserver-addons-825629" [10debb6d-7207-4ff6-b146-6dfb8f62c005] Running
	I0923 11:54:15.901922  505701 system_pods.go:61] "kube-controller-manager-addons-825629" [1bb2c25d-2425-41f9-9109-a2a0eeb033e8] Running
	I0923 11:54:15.901931  505701 system_pods.go:61] "kube-ingress-dns-minikube" [519d6921-179f-451f-bdeb-48bade9e5aa4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 11:54:15.901937  505701 system_pods.go:61] "kube-proxy-jktfj" [7517c583-f4ce-4658-bf73-f889d32819a5] Running
	I0923 11:54:15.901943  505701 system_pods.go:61] "kube-scheduler-addons-825629" [b45d0f66-9991-4841-bacd-842d84aa8f5f] Running
	I0923 11:54:15.901954  505701 system_pods.go:61] "metrics-server-84c5f94fbc-p7599" [0ea45ea8-1de0-4424-9d69-1ceaa23c38c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:54:15.901973  505701 system_pods.go:61] "nvidia-device-plugin-daemonset-hm4kd" [ad43bcf9-fa90-4052-85cb-9b74ad1d5716] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 11:54:15.901990  505701 system_pods.go:61] "registry-66c9cd494c-wc269" [93368544-2bdd-4676-901f-cc2b1f4cfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:54:15.901998  505701 system_pods.go:61] "registry-proxy-49g5s" [da46be15-7046-4070-9862-00f5586a04c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:54:15.902006  505701 system_pods.go:61] "snapshot-controller-56fcc65765-c727g" [0fadab7d-e456-4db0-b692-51b354290449] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:54:15.902022  505701 system_pods.go:61] "snapshot-controller-56fcc65765-s2wl4" [ac36af27-b2d9-47c8-98ab-3037ac9dee8f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:54:15.902029  505701 system_pods.go:61] "storage-provisioner" [f198437f-c548-4b3f-baed-f706940d9499] Running
	I0923 11:54:15.902041  505701 system_pods.go:74] duration metric: took 34.874117ms to wait for pod list to return data ...
	I0923 11:54:15.902054  505701 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:54:15.928580  505701 default_sa.go:45] found service account: "default"
	I0923 11:54:15.928618  505701 default_sa.go:55] duration metric: took 26.54885ms for default service account to be created ...
	I0923 11:54:15.928633  505701 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:54:15.946315  505701 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 11:54:15.946351  505701 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 11:54:15.958443  505701 system_pods.go:86] 18 kube-system pods found
	I0923 11:54:15.958490  505701 system_pods.go:89] "coredns-7c65d6cfc9-6xckn" [ec277d8a-8d88-489d-a8d3-d8217cf3d358] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 11:54:15.958503  505701 system_pods.go:89] "coredns-7c65d6cfc9-mg246" [1c261074-5608-462d-bf9f-1fdde5495da5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0923 11:54:15.958516  505701 system_pods.go:89] "csi-hostpath-attacher-0" [3596d2b0-7c88-46de-a53d-7604ac668254] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:54:15.958527  505701 system_pods.go:89] "csi-hostpath-resizer-0" [dac60e72-6cea-4aa6-bfb5-56323c0ee3ed] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 11:54:15.958535  505701 system_pods.go:89] "csi-hostpathplugin-q2tzs" [4fdb02a0-da37-422f-80a5-d5b51bf189c7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:54:15.958544  505701 system_pods.go:89] "etcd-addons-825629" [01ad5e2a-befd-4d33-b794-1dcad1de1019] Running
	I0923 11:54:15.958551  505701 system_pods.go:89] "kube-apiserver-addons-825629" [10debb6d-7207-4ff6-b146-6dfb8f62c005] Running
	I0923 11:54:15.958567  505701 system_pods.go:89] "kube-controller-manager-addons-825629" [1bb2c25d-2425-41f9-9109-a2a0eeb033e8] Running
	I0923 11:54:15.958580  505701 system_pods.go:89] "kube-ingress-dns-minikube" [519d6921-179f-451f-bdeb-48bade9e5aa4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 11:54:15.958597  505701 system_pods.go:89] "kube-proxy-jktfj" [7517c583-f4ce-4658-bf73-f889d32819a5] Running
	I0923 11:54:15.958603  505701 system_pods.go:89] "kube-scheduler-addons-825629" [b45d0f66-9991-4841-bacd-842d84aa8f5f] Running
	I0923 11:54:15.958611  505701 system_pods.go:89] "metrics-server-84c5f94fbc-p7599" [0ea45ea8-1de0-4424-9d69-1ceaa23c38c6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:54:15.958619  505701 system_pods.go:89] "nvidia-device-plugin-daemonset-hm4kd" [ad43bcf9-fa90-4052-85cb-9b74ad1d5716] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0923 11:54:15.958631  505701 system_pods.go:89] "registry-66c9cd494c-wc269" [93368544-2bdd-4676-901f-cc2b1f4cfa8a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:54:15.958642  505701 system_pods.go:89] "registry-proxy-49g5s" [da46be15-7046-4070-9862-00f5586a04c9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:54:15.958650  505701 system_pods.go:89] "snapshot-controller-56fcc65765-c727g" [0fadab7d-e456-4db0-b692-51b354290449] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:54:15.958661  505701 system_pods.go:89] "snapshot-controller-56fcc65765-s2wl4" [ac36af27-b2d9-47c8-98ab-3037ac9dee8f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:54:15.958667  505701 system_pods.go:89] "storage-provisioner" [f198437f-c548-4b3f-baed-f706940d9499] Running
	I0923 11:54:15.958677  505701 system_pods.go:126] duration metric: took 30.036815ms to wait for k8s-apps to be running ...
	I0923 11:54:15.958690  505701 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:54:15.958767  505701 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:54:16.119748  505701 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:54:16.119777  505701 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 11:54:16.166739  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:16.255312  505701 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:54:16.279009  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:16.335877  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:16.666147  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:16.780004  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:16.832960  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:16.897530  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.044172625s)
	I0923 11:54:16.897588  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:16.897606  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:16.897599  505701 system_svc.go:56] duration metric: took 938.899347ms WaitForService to wait for kubelet
	I0923 11:54:16.897627  505701 kubeadm.go:582] duration metric: took 14.832250898s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:54:16.897656  505701 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:54:16.897903  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:16.897922  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:16.897987  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:16.898002  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:16.898010  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:16.898311  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:16.898367  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:16.898385  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:16.901543  505701 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 11:54:16.901576  505701 node_conditions.go:123] node cpu capacity is 2
	I0923 11:54:16.901593  505701 node_conditions.go:105] duration metric: took 3.929441ms to run NodePressure ...
	I0923 11:54:16.901610  505701 start.go:241] waiting for startup goroutines ...
	I0923 11:54:17.167007  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:17.285871  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:17.392526  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:17.488963  505701 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.233597459s)
	I0923 11:54:17.489031  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:17.489052  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:17.489515  505701 main.go:141] libmachine: (addons-825629) DBG | Closing plugin on server side
	I0923 11:54:17.489630  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:17.489655  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:17.489669  505701 main.go:141] libmachine: Making call to close driver server
	I0923 11:54:17.489681  505701 main.go:141] libmachine: (addons-825629) Calling .Close
	I0923 11:54:17.489993  505701 main.go:141] libmachine: Successfully made call to close driver server
	I0923 11:54:17.490010  505701 main.go:141] libmachine: Making call to close connection to plugin binary
	I0923 11:54:17.491424  505701 addons.go:475] Verifying addon gcp-auth=true in "addons-825629"
	I0923 11:54:17.493431  505701 out.go:177] * Verifying gcp-auth addon...
	I0923 11:54:17.496026  505701 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 11:54:17.514857  505701 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:54:17.666473  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:17.782874  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:17.836997  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:18.167546  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:18.278668  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:18.333330  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:18.667719  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:18.778836  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:18.833566  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:19.166245  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:19.277501  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:19.333014  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:19.666581  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:19.779195  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:19.832676  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:20.165649  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:20.278203  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:20.332495  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:20.666519  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:20.778680  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:20.832757  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:21.370356  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:21.371975  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:21.372244  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:21.666743  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:21.778679  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:21.832980  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:22.166380  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:22.278420  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:22.332502  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:22.667309  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:23.037335  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:23.037983  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:23.167429  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:23.278261  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:23.333854  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:23.666254  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:23.781862  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:23.833184  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:24.166717  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:24.278394  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:24.332567  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:24.665860  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:24.778825  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:24.832554  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:25.166220  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:25.277825  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:25.331722  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:25.666781  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:25.778540  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:25.833165  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:26.166450  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:26.278369  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:26.332530  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:26.665697  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:26.778423  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:26.832346  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:27.166228  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:27.278546  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:27.332505  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:27.665885  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:27.778335  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:27.832675  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:28.166467  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:28.278061  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:28.333266  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:28.666709  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:28.883137  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:28.886474  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:29.166426  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:29.278148  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:29.332110  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:29.666711  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:29.778060  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:29.832172  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:30.166154  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:30.277779  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:30.332025  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:30.666739  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:30.778743  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:30.833780  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:31.167355  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:31.278364  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:31.333569  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:31.667719  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:31.778593  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:31.833857  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:32.165673  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:32.278382  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:32.333600  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:32.685801  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:32.777970  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:32.832931  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:33.167408  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:33.277927  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:33.333532  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:33.666381  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:33.778824  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:33.832220  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:34.166623  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:34.278159  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:34.331945  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:34.666277  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:34.777562  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:34.832604  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:35.165982  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:35.278462  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:35.332438  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:35.666695  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:35.779006  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:35.833200  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:36.167011  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:36.277558  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:36.333152  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:36.665840  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:36.779131  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:36.832413  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:37.169804  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:37.552845  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:37.553287  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:37.665891  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:37.778595  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:37.833024  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:38.166612  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:38.278193  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:38.332618  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:38.666165  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:38.778629  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:38.832726  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:39.166321  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:39.278306  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:39.333025  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:39.667300  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:39.780641  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:39.881433  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:40.167056  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:40.278267  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:40.332697  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:40.666136  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:40.778005  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:40.832606  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:41.166089  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:41.278274  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:41.332363  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:41.666523  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:41.778303  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:41.832809  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:42.166593  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:42.278788  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:42.332968  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:42.665906  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:42.778632  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:42.832851  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:43.166542  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:43.277832  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:43.331805  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:43.666604  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:43.778154  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:43.832555  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:44.167340  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:44.277895  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:44.339453  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:44.666537  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:44.777767  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:44.832954  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:45.166365  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:45.277955  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:45.332849  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:45.678560  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:45.778969  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:45.832405  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:46.167030  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:46.278650  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:46.380717  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:46.666357  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:54:46.778082  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:46.837166  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:47.166742  505701 kapi.go:107] duration metric: took 32.504479508s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 11:54:47.277567  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:47.332775  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:47.778730  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:47.832603  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:48.278082  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:48.332432  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:48.780645  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:48.884667  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:49.284286  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:49.332774  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:49.777808  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:49.851829  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:50.281408  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:50.337961  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:50.777892  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:50.834780  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:51.278504  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:51.333064  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:51.778710  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:51.832832  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:52.278091  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:52.332175  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:52.777649  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:52.832756  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:53.278669  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:53.333087  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:53.778823  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:53.833285  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:54.277879  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:54.332119  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:54.778513  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:54.860207  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:55.277926  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:55.333346  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:55.778131  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:55.832745  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:56.278891  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:56.333287  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:56.778434  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:56.832765  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:57.278560  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:57.333234  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:57.778138  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:57.833337  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:58.277881  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:58.333601  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:58.778932  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:58.833391  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:59.278007  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:54:59.332874  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:54:59.783979  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:00.148386  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:00.281572  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:00.333841  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:00.778384  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:00.832753  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:01.279021  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:01.332887  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:01.783843  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:01.834494  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:02.279505  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:02.333400  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:02.777888  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:02.832532  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:03.278509  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:03.333104  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:03.781345  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:03.880315  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:04.286302  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:04.334043  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:04.778027  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:04.832356  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:05.277492  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:05.333003  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:05.777920  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:05.833120  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:06.280839  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:06.332914  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:06.779672  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:06.833153  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:07.277633  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:07.332869  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:07.778479  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:07.832778  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:08.278483  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:08.332688  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:08.778401  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:08.832779  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:09.278134  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:09.332334  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:09.780332  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:09.832192  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:10.314351  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:10.333000  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:10.780004  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:10.833655  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:11.277509  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:11.334274  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:11.778698  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:11.832964  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:12.278802  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:12.379737  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:12.778114  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:12.832399  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:13.278140  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:13.332553  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:13.779028  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:13.832021  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:14.278236  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:14.336102  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:14.780955  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:14.882264  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:15.278522  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:15.332449  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:15.778860  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:15.832853  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:16.306171  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:16.337537  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:16.778967  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:16.833125  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:17.278992  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:17.333125  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:17.842345  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:17.842432  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:18.286467  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:18.333364  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:18.778725  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:18.835332  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:19.277590  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:19.333880  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:19.779762  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:19.839001  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:20.280389  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:20.338553  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:20.778675  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:20.832741  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:21.278050  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:21.332874  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:21.779031  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:21.834652  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:22.458420  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:22.460309  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:22.778393  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:22.832287  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:23.278439  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:23.333110  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:23.779031  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:23.831846  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:24.277816  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:24.333225  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:24.778778  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:24.832837  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:25.280209  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:25.332742  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:25.777780  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:25.831818  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:26.278483  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:26.332991  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:26.778256  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:26.832972  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:27.280515  505701 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 11:55:27.338125  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:27.779975  505701 kapi.go:107] duration metric: took 1m15.506325373s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 11:55:27.832336  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:28.401792  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:28.833250  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:29.334609  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:29.832997  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:30.335765  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:30.832898  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:31.332588  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:31.836497  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:32.334014  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:32.833503  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:33.333090  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:33.836196  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:34.332886  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:34.833622  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:55:35.332774  505701 kapi.go:107] duration metric: took 1m19.504736622s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 11:55:40.505219  505701 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:55:40.505244  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:40.999761  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:41.502475  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:42.000423  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:42.500427  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:43.000275  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:43.499830  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:43.999577  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:44.500944  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:44.999716  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:45.500328  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:46.000659  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:46.500239  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:47.000355  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:47.499921  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:47.999898  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:48.501079  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:49.000631  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:49.500585  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:50.000389  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:50.499965  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:51.000231  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:51.500013  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:51.999675  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:52.500428  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:53.000756  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:53.500542  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:54.000631  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:54.500612  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:55.000319  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:55.499783  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:56.000498  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:56.500177  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:56.999774  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:57.500296  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:58.000221  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:58.500500  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:59.000502  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:55:59.500501  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:00.000230  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:00.499576  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:00.999693  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:01.500224  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:01.999585  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:02.500388  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:02.999952  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:03.499259  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:03.999395  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:04.500353  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:05.001947  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:05.502070  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:06.000715  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:06.499661  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:07.000352  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:07.499712  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:08.000182  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:08.499900  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:09.000033  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:09.499977  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:09.999547  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:10.500713  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:11.000731  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:11.502485  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:11.999852  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:12.499800  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:13.000673  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:13.500812  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:14.000079  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:14.499921  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:14.999523  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:15.500597  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:16.000298  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:16.499342  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:17.000323  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:17.499882  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:17.999622  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:18.500404  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:19.000926  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:19.499961  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:20.000245  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:20.499862  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:21.000317  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:21.499522  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:22.000549  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:22.501390  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:23.001114  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:23.499572  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:24.015073  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:24.500620  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:25.000191  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:25.499442  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:25.999878  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:26.500300  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:27.000319  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:27.499916  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:27.999614  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:28.500625  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:29.000533  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:29.500507  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:29.999669  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:30.499948  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:30.999609  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:31.499732  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:32.001091  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:32.499957  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:32.999815  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:33.500377  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:33.999431  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:34.500055  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:34.999776  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:35.500126  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:35.999591  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:36.499608  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:37.000674  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:37.500413  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:38.000232  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:38.500071  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:39.000080  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:39.500116  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:39.999678  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:40.500555  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:41.001451  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:41.499911  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:42.000618  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:42.499304  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:43.000074  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:43.499891  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:44.000290  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:44.501362  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:44.999522  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:45.500635  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:46.000069  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:46.500138  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:46.999360  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:47.499668  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:48.000689  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:48.508996  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:49.000028  505701 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:56:49.501732  505701 kapi.go:107] duration metric: took 2m32.005701756s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 11:56:49.503490  505701 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-825629 cluster.
	I0923 11:56:49.504722  505701 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 11:56:49.506162  505701 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 11:56:49.507752  505701 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, inspektor-gadget, metrics-server, volcano, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0923 11:56:49.509076  505701 addons.go:510] duration metric: took 2m47.443804907s for enable addons: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner inspektor-gadget metrics-server volcano yakd default-storageclass storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0923 11:56:49.509133  505701 start.go:246] waiting for cluster config update ...
	I0923 11:56:49.509162  505701 start.go:255] writing updated cluster config ...
	I0923 11:56:49.509464  505701 ssh_runner.go:195] Run: rm -f paused
	I0923 11:56:49.562548  505701 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:56:49.564102  505701 out.go:177] * Done! kubectl is now configured to use "addons-825629" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 12:06:43 addons-825629 dockerd[1199]: time="2024-09-23T12:06:43.618373208Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 12:06:43 addons-825629 dockerd[1193]: time="2024-09-23T12:06:43.739797459Z" level=info msg="ignoring event" container=56b24075dc6797d60f29aa3df5ad667fde56b4ed4eca154dd54c01b4ee0ca533 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:06:43 addons-825629 dockerd[1199]: time="2024-09-23T12:06:43.740062447Z" level=info msg="shim disconnected" id=56b24075dc6797d60f29aa3df5ad667fde56b4ed4eca154dd54c01b4ee0ca533 namespace=moby
	Sep 23 12:06:43 addons-825629 dockerd[1199]: time="2024-09-23T12:06:43.740104337Z" level=warning msg="cleaning up after shim disconnected" id=56b24075dc6797d60f29aa3df5ad667fde56b4ed4eca154dd54c01b4ee0ca533 namespace=moby
	Sep 23 12:06:43 addons-825629 dockerd[1199]: time="2024-09-23T12:06:43.740114095Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 12:06:47 addons-825629 dockerd[1199]: time="2024-09-23T12:06:47.499790433Z" level=info msg="shim disconnected" id=83170dc60acbc3421e8138550255e955cab7d0297ec2917b070446e7f9abfca8 namespace=moby
	Sep 23 12:06:47 addons-825629 dockerd[1193]: time="2024-09-23T12:06:47.500441325Z" level=info msg="ignoring event" container=83170dc60acbc3421e8138550255e955cab7d0297ec2917b070446e7f9abfca8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:06:47 addons-825629 dockerd[1199]: time="2024-09-23T12:06:47.500473708Z" level=warning msg="cleaning up after shim disconnected" id=83170dc60acbc3421e8138550255e955cab7d0297ec2917b070446e7f9abfca8 namespace=moby
	Sep 23 12:06:47 addons-825629 dockerd[1199]: time="2024-09-23T12:06:47.500746884Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 12:06:47 addons-825629 dockerd[1193]: time="2024-09-23T12:06:47.936754983Z" level=info msg="ignoring event" container=0f258588ca19cebce164b789d92665ddaf1c73ac5228e963058582a4fc5d5ebd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:06:47 addons-825629 dockerd[1199]: time="2024-09-23T12:06:47.936995042Z" level=info msg="shim disconnected" id=0f258588ca19cebce164b789d92665ddaf1c73ac5228e963058582a4fc5d5ebd namespace=moby
	Sep 23 12:06:47 addons-825629 dockerd[1199]: time="2024-09-23T12:06:47.937224977Z" level=warning msg="cleaning up after shim disconnected" id=0f258588ca19cebce164b789d92665ddaf1c73ac5228e963058582a4fc5d5ebd namespace=moby
	Sep 23 12:06:47 addons-825629 dockerd[1199]: time="2024-09-23T12:06:47.937317338Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 12:06:48 addons-825629 dockerd[1199]: time="2024-09-23T12:06:48.069773485Z" level=info msg="shim disconnected" id=e841d8b04743d8f05b3cd310389c7f61461838ed400d349aaa88a3e04fe2779e namespace=moby
	Sep 23 12:06:48 addons-825629 dockerd[1199]: time="2024-09-23T12:06:48.070531868Z" level=warning msg="cleaning up after shim disconnected" id=e841d8b04743d8f05b3cd310389c7f61461838ed400d349aaa88a3e04fe2779e namespace=moby
	Sep 23 12:06:48 addons-825629 dockerd[1193]: time="2024-09-23T12:06:48.070674132Z" level=info msg="ignoring event" container=e841d8b04743d8f05b3cd310389c7f61461838ed400d349aaa88a3e04fe2779e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:06:48 addons-825629 dockerd[1199]: time="2024-09-23T12:06:48.070874240Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 12:06:48 addons-825629 dockerd[1193]: time="2024-09-23T12:06:48.150392453Z" level=info msg="ignoring event" container=4b3dc76cff7ea75b63389db012cc839c8e2b9495ac790eac05096461a6554d6a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:06:48 addons-825629 dockerd[1199]: time="2024-09-23T12:06:48.152035549Z" level=info msg="shim disconnected" id=4b3dc76cff7ea75b63389db012cc839c8e2b9495ac790eac05096461a6554d6a namespace=moby
	Sep 23 12:06:48 addons-825629 dockerd[1199]: time="2024-09-23T12:06:48.152143424Z" level=warning msg="cleaning up after shim disconnected" id=4b3dc76cff7ea75b63389db012cc839c8e2b9495ac790eac05096461a6554d6a namespace=moby
	Sep 23 12:06:48 addons-825629 dockerd[1199]: time="2024-09-23T12:06:48.152160569Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 12:06:48 addons-825629 dockerd[1193]: time="2024-09-23T12:06:48.286596626Z" level=info msg="ignoring event" container=447140eaf1905afdca5adf18f6fd993436afbc34b3451e336a9dcd87f6fbbf20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:06:48 addons-825629 dockerd[1199]: time="2024-09-23T12:06:48.288155507Z" level=info msg="shim disconnected" id=447140eaf1905afdca5adf18f6fd993436afbc34b3451e336a9dcd87f6fbbf20 namespace=moby
	Sep 23 12:06:48 addons-825629 dockerd[1199]: time="2024-09-23T12:06:48.288385856Z" level=warning msg="cleaning up after shim disconnected" id=447140eaf1905afdca5adf18f6fd993436afbc34b3451e336a9dcd87f6fbbf20 namespace=moby
	Sep 23 12:06:48 addons-825629 dockerd[1199]: time="2024-09-23T12:06:48.288482199Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8e7ebeb00b9ad       a416a98b71e22                                                                                                                35 seconds ago      Exited              helper-pod                0                   e8c9a08e70dd9       helper-pod-delete-pvc-2838b3b1-f740-472d-b374-6eb70574df74
	6bc9b8f459165       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              39 seconds ago      Exited              busybox                   0                   598ddc0f1ba06       test-local-path
	d64dc733d1b96       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  41 seconds ago      Running             hello-world-app           0                   797a0fb34958e       hello-world-app-55bf9c44b4-98jjm
	fbb95d68de970       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              45 seconds ago      Exited              helper-pod                0                   5421e7b8f4be7       helper-pod-create-pvc-2838b3b1-f740-472d-b374-6eb70574df74
	ec7e2cbc57966       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                50 seconds ago      Running             nginx                     0                   0c43ae0ac2e70       nginx
	98ae5802a756c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   c94a02a4c404f       gcp-auth-89d5ffd79-lswn4
	b5899d75fe040       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   1f5d71fc5e51c       ingress-nginx-admission-patch-n4bbc
	49ece98b91115       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   7b70f4e144d6d       ingress-nginx-admission-create-thk6d
	abf54e8d119d4       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   6c588a7f5f2d2       storage-provisioner
	2d50e6e9b3ed6       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   cf49d409aece8       coredns-7c65d6cfc9-6xckn
	78731caf08364       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   7f10350fc2f30       kube-proxy-jktfj
	ca52d55232a19       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   61e35be9939f5       etcd-addons-825629
	5c54f26b51d42       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   231033deed525       kube-scheduler-addons-825629
	96c081616cd40       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   8e0d6d1ae2ebc       kube-controller-manager-addons-825629
	77fdbe58dbb1b       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   a76972afba03e       kube-apiserver-addons-825629
	
	
	==> coredns [2d50e6e9b3ed] <==
	[INFO] 10.244.0.21:36432 - 5666 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000131219s
	[INFO] 10.244.0.21:36432 - 50255 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000128209s
	[INFO] 10.244.0.21:36432 - 55546 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098785s
	[INFO] 10.244.0.21:36432 - 40484 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00011676s
	[INFO] 10.244.0.21:46657 - 60036 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000122584s
	[INFO] 10.244.0.21:46657 - 39337 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000137717s
	[INFO] 10.244.0.21:42951 - 48771 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000093449s
	[INFO] 10.244.0.21:46657 - 47554 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000160539s
	[INFO] 10.244.0.21:42951 - 44054 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000046234s
	[INFO] 10.244.0.21:46657 - 18021 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000176346s
	[INFO] 10.244.0.21:46657 - 63035 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063843s
	[INFO] 10.244.0.21:42951 - 29158 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043399s
	[INFO] 10.244.0.21:46657 - 2775 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000289081s
	[INFO] 10.244.0.21:46657 - 43876 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000103496s
	[INFO] 10.244.0.21:42951 - 32184 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000173359s
	[INFO] 10.244.0.21:42951 - 36516 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081974s
	[INFO] 10.244.0.21:42951 - 3020 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000065651s
	[INFO] 10.244.0.21:42951 - 22519 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069023s
	[INFO] 10.244.0.21:56986 - 26447 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101047s
	[INFO] 10.244.0.21:56986 - 56930 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065234s
	[INFO] 10.244.0.21:56986 - 28943 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059846s
	[INFO] 10.244.0.21:56986 - 40820 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055476s
	[INFO] 10.244.0.21:56986 - 52548 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063155s
	[INFO] 10.244.0.21:56986 - 15339 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000085964s
	[INFO] 10.244.0.21:56986 - 18723 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000146477s
	
	
	==> describe nodes <==
	Name:               addons-825629
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-825629
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=addons-825629
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_53_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-825629
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:53:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-825629
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:06:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:06:33 +0000   Mon, 23 Sep 2024 11:53:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:06:33 +0000   Mon, 23 Sep 2024 11:53:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:06:33 +0000   Mon, 23 Sep 2024 11:53:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:06:33 +0000   Mon, 23 Sep 2024 11:53:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    addons-825629
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 49f4be6f818241029663f03fd2527b50
	  System UUID:                49f4be6f-8182-4102-9663-f03fd2527b50
	  Boot ID:                    a7ce5a8e-e0a2-479b-8dfe-6380810548a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     hello-world-app-55bf9c44b4-98jjm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  gcp-auth                    gcp-auth-89d5ffd79-lswn4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-6xckn                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-825629                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-825629             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-825629    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jktfj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-825629             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-825629 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-825629 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-825629 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-825629 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-825629 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-825629 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                kubelet          Node addons-825629 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-825629 event: Registered Node addons-825629 in Controller
	
	
	==> dmesg <==
	[Sep23 11:55] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.094675] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.049928] kauditd_printk_skb: 17 callbacks suppressed
	[ +10.199243] kauditd_printk_skb: 75 callbacks suppressed
	[  +5.285146] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.119235] kauditd_printk_skb: 36 callbacks suppressed
	[Sep23 11:56] kauditd_printk_skb: 28 callbacks suppressed
	[ +23.786415] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.124584] kauditd_printk_skb: 9 callbacks suppressed
	[Sep23 11:57] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.356074] kauditd_printk_skb: 2 callbacks suppressed
	[ +18.045740] kauditd_printk_skb: 20 callbacks suppressed
	[ +19.937204] kauditd_printk_skb: 21 callbacks suppressed
	[Sep23 12:01] kauditd_printk_skb: 28 callbacks suppressed
	[Sep23 12:05] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.272519] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.585122] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.371948] kauditd_printk_skb: 24 callbacks suppressed
	[Sep23 12:06] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.412985] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.436578] kauditd_printk_skb: 38 callbacks suppressed
	[  +8.295554] kauditd_printk_skb: 23 callbacks suppressed
	[ +10.529134] kauditd_printk_skb: 19 callbacks suppressed
	[  +8.415898] kauditd_printk_skb: 33 callbacks suppressed
	[  +7.458332] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [ca52d55232a1] <==
	{"level":"warn","ts":"2024-09-23T11:55:00.126678Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T11:54:59.813675Z","time spent":"312.978983ms","remote":"127.0.0.1:42604","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-09-23T11:55:22.435763Z","caller":"traceutil/trace.go:171","msg":"trace[1732614169] linearizableReadLoop","detail":"{readStateIndex:1274; appliedIndex:1273; }","duration":"201.635002ms","start":"2024-09-23T11:55:22.234108Z","end":"2024-09-23T11:55:22.435743Z","steps":["trace[1732614169] 'read index received'  (duration: 201.499859ms)","trace[1732614169] 'applied index is now lower than readState.Index'  (duration: 134.557µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T11:55:22.435784Z","caller":"traceutil/trace.go:171","msg":"trace[1597074486] transaction","detail":"{read_only:false; response_revision:1238; number_of_response:1; }","duration":"298.444072ms","start":"2024-09-23T11:55:22.137319Z","end":"2024-09-23T11:55:22.435763Z","steps":["trace[1597074486] 'process raft request'  (duration: 298.321541ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:55:22.435976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.846944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:497"}
	{"level":"info","ts":"2024-09-23T11:55:22.436006Z","caller":"traceutil/trace.go:171","msg":"trace[1776102484] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1238; }","duration":"201.894313ms","start":"2024-09-23T11:55:22.234104Z","end":"2024-09-23T11:55:22.435999Z","steps":["trace[1776102484] 'agreement among raft nodes before linearized reading'  (duration: 201.741944ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:55:22.436089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.424206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T11:55:22.436110Z","caller":"traceutil/trace.go:171","msg":"trace[309522710] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1238; }","duration":"178.454573ms","start":"2024-09-23T11:55:22.257650Z","end":"2024-09-23T11:55:22.436104Z","steps":["trace[309522710] 'agreement among raft nodes before linearized reading'  (duration: 178.409613ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:55:22.436237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.304248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-23T11:55:22.436250Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.792782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-09-23T11:55:22.436254Z","caller":"traceutil/trace.go:171","msg":"trace[1970581968] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1238; }","duration":"124.324589ms","start":"2024-09-23T11:55:22.311925Z","end":"2024-09-23T11:55:22.436249Z","steps":["trace[1970581968] 'agreement among raft nodes before linearized reading'  (duration: 124.288796ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:55:22.436265Z","caller":"traceutil/trace.go:171","msg":"trace[956443333] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:0; response_revision:1238; }","duration":"172.811377ms","start":"2024-09-23T11:55:22.263450Z","end":"2024-09-23T11:55:22.436261Z","steps":["trace[956443333] 'agreement among raft nodes before linearized reading'  (duration: 172.774516ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:57:14.527172Z","caller":"traceutil/trace.go:171","msg":"trace[1676517242] linearizableReadLoop","detail":"{readStateIndex:1653; appliedIndex:1652; }","duration":"136.756383ms","start":"2024-09-23T11:57:14.390374Z","end":"2024-09-23T11:57:14.527130Z","steps":["trace[1676517242] 'read index received'  (duration: 136.628885ms)","trace[1676517242] 'applied index is now lower than readState.Index'  (duration: 127.089µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T11:57:14.527454Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.98678ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T11:57:14.527487Z","caller":"traceutil/trace.go:171","msg":"trace[1698190739] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1592; }","duration":"137.106671ms","start":"2024-09-23T11:57:14.390369Z","end":"2024-09-23T11:57:14.527476Z","steps":["trace[1698190739] 'agreement among raft nodes before linearized reading'  (duration: 136.948103ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T11:57:14.527723Z","caller":"traceutil/trace.go:171","msg":"trace[975987896] transaction","detail":"{read_only:false; response_revision:1592; number_of_response:1; }","duration":"235.566783ms","start":"2024-09-23T11:57:14.292137Z","end":"2024-09-23T11:57:14.527704Z","steps":["trace[975987896] 'process raft request'  (duration: 234.912644ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T11:57:16.092161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.439409ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T11:57:16.092236Z","caller":"traceutil/trace.go:171","msg":"trace[1441546014] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1594; }","duration":"104.546796ms","start":"2024-09-23T11:57:15.987674Z","end":"2024-09-23T11:57:16.092221Z","steps":["trace[1441546014] 'range keys from in-memory index tree'  (duration: 104.324223ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T12:03:53.667182Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1904}
	{"level":"info","ts":"2024-09-23T12:03:53.782109Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1904,"took":"113.759811ms","hash":1785873687,"current-db-size-bytes":8986624,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":5033984,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-23T12:03:53.782259Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1785873687,"revision":1904,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T12:05:41.851687Z","caller":"traceutil/trace.go:171","msg":"trace[1492973489] linearizableReadLoop","detail":"{readStateIndex:2685; appliedIndex:2684; }","duration":"209.499676ms","start":"2024-09-23T12:05:41.642139Z","end":"2024-09-23T12:05:41.851638Z","steps":["trace[1492973489] 'read index received'  (duration: 209.194152ms)","trace[1492973489] 'applied index is now lower than readState.Index'  (duration: 304.949µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T12:05:41.851917Z","caller":"traceutil/trace.go:171","msg":"trace[1024955865] transaction","detail":"{read_only:false; response_revision:2513; number_of_response:1; }","duration":"304.115934ms","start":"2024-09-23T12:05:41.547781Z","end":"2024-09-23T12:05:41.851897Z","steps":["trace[1024955865] 'process raft request'  (duration: 303.603073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:05:41.851989Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.786905ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-23T12:05:41.852018Z","caller":"traceutil/trace.go:171","msg":"trace[111187323] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:2513; }","duration":"209.873485ms","start":"2024-09-23T12:05:41.642135Z","end":"2024-09-23T12:05:41.852008Z","steps":["trace[111187323] 'agreement among raft nodes before linearized reading'  (duration: 209.710118ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T12:05:41.852711Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T12:05:41.547765Z","time spent":"304.195419ms","remote":"127.0.0.1:42586","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2511 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> gcp-auth [98ae5802a756] <==
	2024/09/23 11:57:33 Ready to write response ...
	2024/09/23 11:57:33 Ready to marshal response ...
	2024/09/23 11:57:33 Ready to write response ...
	2024/09/23 12:05:36 Ready to marshal response ...
	2024/09/23 12:05:36 Ready to write response ...
	2024/09/23 12:05:37 Ready to marshal response ...
	2024/09/23 12:05:37 Ready to write response ...
	2024/09/23 12:05:37 Ready to marshal response ...
	2024/09/23 12:05:37 Ready to write response ...
	2024/09/23 12:05:47 Ready to marshal response ...
	2024/09/23 12:05:47 Ready to write response ...
	2024/09/23 12:05:54 Ready to marshal response ...
	2024/09/23 12:05:54 Ready to write response ...
	2024/09/23 12:06:01 Ready to marshal response ...
	2024/09/23 12:06:01 Ready to write response ...
	2024/09/23 12:06:01 Ready to marshal response ...
	2024/09/23 12:06:01 Ready to write response ...
	2024/09/23 12:06:02 Ready to marshal response ...
	2024/09/23 12:06:02 Ready to write response ...
	2024/09/23 12:06:04 Ready to marshal response ...
	2024/09/23 12:06:04 Ready to write response ...
	2024/09/23 12:06:12 Ready to marshal response ...
	2024/09/23 12:06:12 Ready to write response ...
	2024/09/23 12:06:24 Ready to marshal response ...
	2024/09/23 12:06:24 Ready to write response ...
	
	
	==> kernel <==
	 12:06:49 up 13 min,  0 users,  load average: 0.32, 0.41, 0.40
	Linux addons-825629 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [77fdbe58dbb1] <==
	W0923 11:57:25.617071       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 11:57:26.034377       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 11:57:26.235750       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0923 12:05:36.964776       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.130.81"}
	I0923 12:05:50.904894       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0923 12:05:53.997274       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 12:05:54.194919       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.4.11"}
	I0923 12:05:55.875792       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 12:05:57.008988       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 12:06:04.740695       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.71.194"}
	I0923 12:06:11.513099       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0923 12:06:29.013300       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 12:06:39.699662       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:06:39.699897       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:06:39.722187       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:06:39.724267       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:06:39.752555       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:06:39.753319       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:06:39.791146       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:06:39.791903       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 12:06:39.869028       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 12:06:39.869924       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 12:06:40.792303       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 12:06:40.865517       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 12:06:40.985507       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [96c081616cd4] <==
	E0923 12:06:40.867942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 12:06:40.987239       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:42.068284       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:42.068341       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:42.418446       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:42.418548       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:42.523975       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:42.524294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:43.501779       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:43.501919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:44.570710       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:44.570763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:44.829175       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:44.829333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:45.006652       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:45.006708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:06:47.880903       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.842µs"
	W0923 12:06:48.272951       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:48.272997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:48.367083       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:48.367169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:48.506581       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:48.506662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:48.763874       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:48.763937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [78731caf0836] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 11:54:04.184949       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 11:54:04.204924       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.2"]
	E0923 11:54:04.205001       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:54:04.278555       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 11:54:04.278600       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 11:54:04.278627       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:54:04.283457       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:54:04.283782       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:54:04.283795       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:54:04.285435       1 config.go:199] "Starting service config controller"
	I0923 11:54:04.285456       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:54:04.285473       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:54:04.285477       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:54:04.285802       1 config.go:328] "Starting node config controller"
	I0923 11:54:04.285860       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:54:04.386587       1 shared_informer.go:320] Caches are synced for node config
	I0923 11:54:04.386630       1 shared_informer.go:320] Caches are synced for service config
	I0923 11:54:04.386658       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5c54f26b51d4] <==
	E0923 11:53:55.017223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0923 11:53:55.017272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0923 11:53:55.017326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0923 11:53:55.017861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 11:53:55.842350       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 11:53:55.842406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:53:55.870608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:53:55.870653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 11:53:55.874043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:53:55.874073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:53:55.965415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:53:55.965462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:53:55.993483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:53:55.993541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:53:56.002374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 11:53:56.002427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:53:56.019427       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 11:53:56.019473       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 11:53:56.148100       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:53:56.148334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:53:56.162713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:53:56.162993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:53:56.181042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:53:56.181151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 11:53:59.186175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 12:06:44 addons-825629 kubelet[1975]: I0923 12:06:44.632033    1975 scope.go:117] "RemoveContainer" containerID="1381574e5d560a592dc04ae4071cfcd4c7e1ece6ab7e3c702b47da3724cc0378"
	Sep 23 12:06:44 addons-825629 kubelet[1975]: E0923 12:06:44.633081    1975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1381574e5d560a592dc04ae4071cfcd4c7e1ece6ab7e3c702b47da3724cc0378" containerID="1381574e5d560a592dc04ae4071cfcd4c7e1ece6ab7e3c702b47da3724cc0378"
	Sep 23 12:06:44 addons-825629 kubelet[1975]: I0923 12:06:44.633180    1975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1381574e5d560a592dc04ae4071cfcd4c7e1ece6ab7e3c702b47da3724cc0378"} err="failed to get container status \"1381574e5d560a592dc04ae4071cfcd4c7e1ece6ab7e3c702b47da3724cc0378\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1381574e5d560a592dc04ae4071cfcd4c7e1ece6ab7e3c702b47da3724cc0378"
	Sep 23 12:06:45 addons-825629 kubelet[1975]: I0923 12:06:45.619659    1975 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef9bf297-f117-4d3a-a970-61f67290ba1f" path="/var/lib/kubelet/pods/ef9bf297-f117-4d3a-a970-61f67290ba1f/volumes"
	Sep 23 12:06:47 addons-825629 kubelet[1975]: I0923 12:06:47.713005    1975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbrfd\" (UniqueName: \"kubernetes.io/projected/03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde-kube-api-access-fbrfd\") pod \"03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde\" (UID: \"03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde\") "
	Sep 23 12:06:47 addons-825629 kubelet[1975]: I0923 12:06:47.713082    1975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde-gcp-creds\") pod \"03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde\" (UID: \"03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde\") "
	Sep 23 12:06:47 addons-825629 kubelet[1975]: I0923 12:06:47.713572    1975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde" (UID: "03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 12:06:47 addons-825629 kubelet[1975]: I0923 12:06:47.716620    1975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde-kube-api-access-fbrfd" (OuterVolumeSpecName: "kube-api-access-fbrfd") pod "03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde" (UID: "03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde"). InnerVolumeSpecName "kube-api-access-fbrfd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:06:47 addons-825629 kubelet[1975]: I0923 12:06:47.814391    1975 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde-gcp-creds\") on node \"addons-825629\" DevicePath \"\""
	Sep 23 12:06:47 addons-825629 kubelet[1975]: I0923 12:06:47.814430    1975 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fbrfd\" (UniqueName: \"kubernetes.io/projected/03f8bc3b-d0fd-4f0f-a3a8-ef70a19eebde-kube-api-access-fbrfd\") on node \"addons-825629\" DevicePath \"\""
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.317636    1975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdqxs\" (UniqueName: \"kubernetes.io/projected/93368544-2bdd-4676-901f-cc2b1f4cfa8a-kube-api-access-tdqxs\") pod \"93368544-2bdd-4676-901f-cc2b1f4cfa8a\" (UID: \"93368544-2bdd-4676-901f-cc2b1f4cfa8a\") "
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.320391    1975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93368544-2bdd-4676-901f-cc2b1f4cfa8a-kube-api-access-tdqxs" (OuterVolumeSpecName: "kube-api-access-tdqxs") pod "93368544-2bdd-4676-901f-cc2b1f4cfa8a" (UID: "93368544-2bdd-4676-901f-cc2b1f4cfa8a"). InnerVolumeSpecName "kube-api-access-tdqxs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.418712    1975 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tdqxs\" (UniqueName: \"kubernetes.io/projected/93368544-2bdd-4676-901f-cc2b1f4cfa8a-kube-api-access-tdqxs\") on node \"addons-825629\" DevicePath \"\""
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.519645    1975 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsmlr\" (UniqueName: \"kubernetes.io/projected/da46be15-7046-4070-9862-00f5586a04c9-kube-api-access-nsmlr\") pod \"da46be15-7046-4070-9862-00f5586a04c9\" (UID: \"da46be15-7046-4070-9862-00f5586a04c9\") "
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.522068    1975 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da46be15-7046-4070-9862-00f5586a04c9-kube-api-access-nsmlr" (OuterVolumeSpecName: "kube-api-access-nsmlr") pod "da46be15-7046-4070-9862-00f5586a04c9" (UID: "da46be15-7046-4070-9862-00f5586a04c9"). InnerVolumeSpecName "kube-api-access-nsmlr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:06:48 addons-825629 kubelet[1975]: E0923 12:06:48.608299    1975 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="f0764de4-481d-412c-ad33-3b569a0ecbc8"
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.620724    1975 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nsmlr\" (UniqueName: \"kubernetes.io/projected/da46be15-7046-4070-9862-00f5586a04c9-kube-api-access-nsmlr\") on node \"addons-825629\" DevicePath \"\""
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.683038    1975 scope.go:117] "RemoveContainer" containerID="e841d8b04743d8f05b3cd310389c7f61461838ed400d349aaa88a3e04fe2779e"
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.726615    1975 scope.go:117] "RemoveContainer" containerID="e841d8b04743d8f05b3cd310389c7f61461838ed400d349aaa88a3e04fe2779e"
	Sep 23 12:06:48 addons-825629 kubelet[1975]: E0923 12:06:48.729209    1975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e841d8b04743d8f05b3cd310389c7f61461838ed400d349aaa88a3e04fe2779e" containerID="e841d8b04743d8f05b3cd310389c7f61461838ed400d349aaa88a3e04fe2779e"
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.729241    1975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e841d8b04743d8f05b3cd310389c7f61461838ed400d349aaa88a3e04fe2779e"} err="failed to get container status \"e841d8b04743d8f05b3cd310389c7f61461838ed400d349aaa88a3e04fe2779e\": rpc error: code = Unknown desc = Error response from daemon: No such container: e841d8b04743d8f05b3cd310389c7f61461838ed400d349aaa88a3e04fe2779e"
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.729265    1975 scope.go:117] "RemoveContainer" containerID="0f258588ca19cebce164b789d92665ddaf1c73ac5228e963058582a4fc5d5ebd"
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.752761    1975 scope.go:117] "RemoveContainer" containerID="0f258588ca19cebce164b789d92665ddaf1c73ac5228e963058582a4fc5d5ebd"
	Sep 23 12:06:48 addons-825629 kubelet[1975]: E0923 12:06:48.754007    1975 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0f258588ca19cebce164b789d92665ddaf1c73ac5228e963058582a4fc5d5ebd" containerID="0f258588ca19cebce164b789d92665ddaf1c73ac5228e963058582a4fc5d5ebd"
	Sep 23 12:06:48 addons-825629 kubelet[1975]: I0923 12:06:48.754051    1975 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0f258588ca19cebce164b789d92665ddaf1c73ac5228e963058582a4fc5d5ebd"} err="failed to get container status \"0f258588ca19cebce164b789d92665ddaf1c73ac5228e963058582a4fc5d5ebd\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0f258588ca19cebce164b789d92665ddaf1c73ac5228e963058582a4fc5d5ebd"
	
	
	==> storage-provisioner [abf54e8d119d] <==
	I0923 11:54:09.315595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:54:09.347032       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:54:09.347121       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:54:09.356190       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:54:09.356341       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-825629_10124072-e526-4a43-92db-e496b3f69ee2!
	I0923 11:54:09.364729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7e8dc33-37fb-4437-ab08-59864ded6eae", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-825629_10124072-e526-4a43-92db-e496b3f69ee2 became leader
	I0923 11:54:09.456742       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-825629_10124072-e526-4a43-92db-e496b3f69ee2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-825629 -n addons-825629
helpers_test.go:261: (dbg) Run:  kubectl --context addons-825629 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-825629 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-825629 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-825629/192.168.39.2
	Start Time:       Mon, 23 Sep 2024 11:57:33 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p84qf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-p84qf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason          Age                     From               Message
	  ----     ------          ----                    ----               -------
	  Normal   Scheduled       9m16s                   default-scheduler  Successfully assigned default/busybox to addons-825629
	  Normal   SandboxChanged  9m14s                   kubelet            Pod sandbox changed, it will be killed and re-created.
	  Normal   Pulling         7m52s (x4 over 9m15s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed          7m52s (x4 over 9m15s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed          7m52s (x4 over 9m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed          7m41s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff         4m15s (x22 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.64s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (154.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-915704 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-915704 --wait=true -v=8 --alsologtostderr --driver=kvm2 : exit status 90 (2m32.111107053s)

                                                
                                                
-- stdout --
	* [multinode-915704] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "multinode-915704" primary control-plane node in "multinode-915704" cluster
	* Restarting existing kvm2 VM for "multinode-915704" ...
	* Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	* Starting "multinode-915704-m02" worker node in "multinode-915704" cluster
	* Restarting existing kvm2 VM for "multinode-915704-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.233
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 12:41:54.199027  533789 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:41:54.199273  533789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:41:54.199282  533789 out.go:358] Setting ErrFile to fd 2...
	I0923 12:41:54.199286  533789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:41:54.199488  533789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	I0923 12:41:54.200051  533789 out.go:352] Setting JSON to false
	I0923 12:41:54.201083  533789 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8656,"bootTime":1727086658,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:41:54.201204  533789 start.go:139] virtualization: kvm guest
	I0923 12:41:54.203731  533789 out.go:177] * [multinode-915704] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:41:54.205541  533789 notify.go:220] Checking for updates...
	I0923 12:41:54.205595  533789 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:41:54.207248  533789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:41:54.208567  533789 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 12:41:54.209811  533789 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	I0923 12:41:54.211368  533789 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:41:54.212830  533789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:41:54.214514  533789 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:41:54.215062  533789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:41:54.215136  533789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:41:54.230668  533789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I0923 12:41:54.231212  533789 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:41:54.231859  533789 main.go:141] libmachine: Using API Version  1
	I0923 12:41:54.231881  533789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:41:54.232332  533789 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:41:54.232529  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:41:54.232839  533789 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:41:54.233199  533789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:41:54.233247  533789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:41:54.248755  533789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I0923 12:41:54.249350  533789 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:41:54.249966  533789 main.go:141] libmachine: Using API Version  1
	I0923 12:41:54.249995  533789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:41:54.250378  533789 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:41:54.250588  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:41:54.287454  533789 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 12:41:54.288751  533789 start.go:297] selected driver: kvm2
	I0923 12:41:54.288772  533789 start.go:901] validating driver "kvm2" against &{Name:multinode-915704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-915704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:41:54.288917  533789 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:41:54.289279  533789 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:41:54.289376  533789 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-497735/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 12:41:54.305488  533789 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 12:41:54.306251  533789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:41:54.306289  533789 cni.go:84] Creating CNI manager for ""
	I0923 12:41:54.306342  533789 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0923 12:41:54.306410  533789 start.go:340] cluster config:
	{Name:multinode-915704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-915704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:41:54.306545  533789 iso.go:125] acquiring lock: {Name:mkc30b88bda541d89938b3c13430927ceb85d23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:41:54.309205  533789 out.go:177] * Starting "multinode-915704" primary control-plane node in "multinode-915704" cluster
	I0923 12:41:54.310716  533789 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:41:54.310780  533789 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 12:41:54.310793  533789 cache.go:56] Caching tarball of preloaded images
	I0923 12:41:54.310893  533789 preload.go:172] Found /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:41:54.310908  533789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:41:54.311032  533789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/config.json ...
	I0923 12:41:54.311255  533789 start.go:360] acquireMachinesLock for multinode-915704: {Name:mk9742766ed80b377dab18455a5851b42572655c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:41:54.311309  533789 start.go:364] duration metric: took 29.682µs to acquireMachinesLock for "multinode-915704"
	I0923 12:41:54.311333  533789 start.go:96] Skipping create...Using existing machine configuration
	I0923 12:41:54.311344  533789 fix.go:54] fixHost starting: 
	I0923 12:41:54.311619  533789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:41:54.311656  533789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:41:54.331078  533789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38489
	I0923 12:41:54.331512  533789 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:41:54.332046  533789 main.go:141] libmachine: Using API Version  1
	I0923 12:41:54.332073  533789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:41:54.332523  533789 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:41:54.332817  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:41:54.332982  533789 main.go:141] libmachine: (multinode-915704) Calling .GetState
	I0923 12:41:54.335099  533789 fix.go:112] recreateIfNeeded on multinode-915704: state=Stopped err=<nil>
	I0923 12:41:54.335127  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	W0923 12:41:54.335293  533789 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 12:41:54.337325  533789 out.go:177] * Restarting existing kvm2 VM for "multinode-915704" ...
	I0923 12:41:54.338687  533789 main.go:141] libmachine: (multinode-915704) Calling .Start
	I0923 12:41:54.338938  533789 main.go:141] libmachine: (multinode-915704) Ensuring networks are active...
	I0923 12:41:54.339898  533789 main.go:141] libmachine: (multinode-915704) Ensuring network default is active
	I0923 12:41:54.340357  533789 main.go:141] libmachine: (multinode-915704) Ensuring network mk-multinode-915704 is active
	I0923 12:41:54.340903  533789 main.go:141] libmachine: (multinode-915704) Getting domain xml...
	I0923 12:41:54.341691  533789 main.go:141] libmachine: (multinode-915704) Creating domain...
	I0923 12:41:55.616048  533789 main.go:141] libmachine: (multinode-915704) Waiting to get IP...
	I0923 12:41:55.616984  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:55.617395  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:55.617511  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:55.617404  533824 retry.go:31] will retry after 204.239914ms: waiting for machine to come up
	I0923 12:41:55.822864  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:55.823348  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:55.823374  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:55.823320  533824 retry.go:31] will retry after 370.145895ms: waiting for machine to come up
	I0923 12:41:56.195174  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:56.195593  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:56.195623  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:56.195561  533824 retry.go:31] will retry after 364.424797ms: waiting for machine to come up
	I0923 12:41:56.562145  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:56.562616  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:56.562644  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:56.562545  533824 retry.go:31] will retry after 573.619472ms: waiting for machine to come up
	I0923 12:41:57.137456  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:57.137942  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:57.137966  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:57.137902  533824 retry.go:31] will retry after 504.492204ms: waiting for machine to come up
	I0923 12:41:57.643695  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:57.644065  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:57.644096  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:57.644017  533824 retry.go:31] will retry after 843.141242ms: waiting for machine to come up
	I0923 12:41:58.488971  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:58.489338  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:58.489365  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:58.489290  533824 retry.go:31] will retry after 987.20219ms: waiting for machine to come up
	I0923 12:41:59.478212  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:59.478655  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:59.478679  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:59.478620  533824 retry.go:31] will retry after 994.73521ms: waiting for machine to come up
	I0923 12:42:00.474739  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:00.475157  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:42:00.475179  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:42:00.475128  533824 retry.go:31] will retry after 1.379660959s: waiting for machine to come up
	I0923 12:42:01.856860  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:01.857506  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:42:01.857536  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:42:01.857446  533824 retry.go:31] will retry after 1.430231424s: waiting for machine to come up
	I0923 12:42:03.290730  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:03.291348  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:42:03.291378  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:42:03.291298  533824 retry.go:31] will retry after 2.739683757s: waiting for machine to come up
	I0923 12:42:06.034026  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:06.034467  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:42:06.034495  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:42:06.034422  533824 retry.go:31] will retry after 3.019160637s: waiting for machine to come up
	I0923 12:42:09.057639  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:09.057984  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:42:09.058014  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:42:09.057933  533824 retry.go:31] will retry after 4.048216952s: waiting for machine to come up
	I0923 12:42:13.111025  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.111500  533789 main.go:141] libmachine: (multinode-915704) Found IP for machine: 192.168.39.233
	I0923 12:42:13.111522  533789 main.go:141] libmachine: (multinode-915704) Reserving static IP address...
	I0923 12:42:13.111539  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has current primary IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.112051  533789 main.go:141] libmachine: (multinode-915704) Reserved static IP address: 192.168.39.233
	I0923 12:42:13.112081  533789 main.go:141] libmachine: (multinode-915704) Waiting for SSH to be available...
	I0923 12:42:13.112102  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "multinode-915704", mac: "52:54:00:1f:99:2b", ip: "192.168.39.233"} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.112140  533789 main.go:141] libmachine: (multinode-915704) DBG | skip adding static IP to network mk-multinode-915704 - found existing host DHCP lease matching {name: "multinode-915704", mac: "52:54:00:1f:99:2b", ip: "192.168.39.233"}
	I0923 12:42:13.112169  533789 main.go:141] libmachine: (multinode-915704) DBG | Getting to WaitForSSH function...
	I0923 12:42:13.114066  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.114396  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.114423  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.114635  533789 main.go:141] libmachine: (multinode-915704) DBG | Using SSH client type: external
	I0923 12:42:13.114659  533789 main.go:141] libmachine: (multinode-915704) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa (-rw-------)
	I0923 12:42:13.114683  533789 main.go:141] libmachine: (multinode-915704) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:42:13.114694  533789 main.go:141] libmachine: (multinode-915704) DBG | About to run SSH command:
	I0923 12:42:13.114706  533789 main.go:141] libmachine: (multinode-915704) DBG | exit 0
	I0923 12:42:13.243522  533789 main.go:141] libmachine: (multinode-915704) DBG | SSH cmd err, output: <nil>: 
	I0923 12:42:13.244183  533789 main.go:141] libmachine: (multinode-915704) Calling .GetConfigRaw
	I0923 12:42:13.244988  533789 main.go:141] libmachine: (multinode-915704) Calling .GetIP
	I0923 12:42:13.247814  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.248216  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.248243  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.248589  533789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/config.json ...
	I0923 12:42:13.249004  533789 machine.go:93] provisionDockerMachine start ...
	I0923 12:42:13.249056  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:13.249312  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.252129  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.252565  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.252595  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.252717  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:13.252910  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.253091  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.253284  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:13.253422  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:13.253625  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:13.253635  533789 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:42:13.362828  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:42:13.362864  533789 main.go:141] libmachine: (multinode-915704) Calling .GetMachineName
	I0923 12:42:13.363151  533789 buildroot.go:166] provisioning hostname "multinode-915704"
	I0923 12:42:13.363179  533789 main.go:141] libmachine: (multinode-915704) Calling .GetMachineName
	I0923 12:42:13.363371  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.366807  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.367279  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.367305  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.367451  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:13.367646  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.367862  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.368011  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:13.368200  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:13.368376  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:13.368388  533789 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-915704 && echo "multinode-915704" | sudo tee /etc/hostname
	I0923 12:42:13.488528  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-915704
	
	I0923 12:42:13.488570  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.491426  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.491798  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.491825  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.491983  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:13.492172  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.492342  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.492622  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:13.492823  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:13.493035  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:13.493054  533789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-915704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-915704/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-915704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:42:13.606717  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:42:13.606746  533789 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-497735/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-497735/.minikube}
	I0923 12:42:13.606788  533789 buildroot.go:174] setting up certificates
	I0923 12:42:13.606799  533789 provision.go:84] configureAuth start
	I0923 12:42:13.606809  533789 main.go:141] libmachine: (multinode-915704) Calling .GetMachineName
	I0923 12:42:13.607174  533789 main.go:141] libmachine: (multinode-915704) Calling .GetIP
	I0923 12:42:13.609974  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.610464  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.610494  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.610677  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.613122  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.613510  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.613538  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.613636  533789 provision.go:143] copyHostCerts
	I0923 12:42:13.613666  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem
	I0923 12:42:13.613698  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem, removing ...
	I0923 12:42:13.613717  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem
	I0923 12:42:13.613793  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem (1078 bytes)
	I0923 12:42:13.613907  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem
	I0923 12:42:13.613930  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem, removing ...
	I0923 12:42:13.613938  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem
	I0923 12:42:13.613968  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem (1123 bytes)
	I0923 12:42:13.614013  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem
	I0923 12:42:13.614030  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem, removing ...
	I0923 12:42:13.614035  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem
	I0923 12:42:13.614065  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem (1679 bytes)
	I0923 12:42:13.614119  533789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem org=jenkins.multinode-915704 san=[127.0.0.1 192.168.39.233 localhost minikube multinode-915704]
	I0923 12:42:13.746597  533789 provision.go:177] copyRemoteCerts
	I0923 12:42:13.746679  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:42:13.746707  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.749582  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.749961  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.749993  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.750271  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:13.750484  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.750660  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:13.750881  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa Username:docker}
	I0923 12:42:13.832378  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:42:13.832461  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:42:13.854985  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:42:13.855064  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0923 12:42:13.879070  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:42:13.879165  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:42:13.901573  533789 provision.go:87] duration metric: took 294.755765ms to configureAuth
	I0923 12:42:13.901609  533789 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:42:13.901891  533789 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:42:13.901921  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:13.902216  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.904891  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.905423  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.905444  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.905780  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:13.906002  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.906175  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.906326  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:13.906500  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:13.906711  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:13.906726  533789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:42:14.016539  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:42:14.016567  533789 buildroot.go:70] root file system type: tmpfs
	I0923 12:42:14.016689  533789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:42:14.016707  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:14.019216  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:14.019648  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:14.019673  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:14.019825  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:14.020017  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:14.020150  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:14.020295  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:14.020478  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:14.020667  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:14.020730  533789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:42:14.139367  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:42:14.139398  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:14.142782  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:14.143214  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:14.143240  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:14.143432  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:14.143649  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:14.143815  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:14.143946  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:14.144120  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:14.144291  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:14.144309  533789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:42:16.009085  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 12:42:16.009130  533789 machine.go:96] duration metric: took 2.760085923s to provisionDockerMachine
	I0923 12:42:16.009145  533789 start.go:293] postStartSetup for "multinode-915704" (driver="kvm2")
	I0923 12:42:16.009171  533789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:42:16.009203  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:16.009522  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:42:16.009552  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:16.012560  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.012990  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:16.013016  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.013197  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:16.013463  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:16.013662  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:16.013824  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa Username:docker}
	I0923 12:42:16.098463  533789 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:42:16.102363  533789 command_runner.go:130] > NAME=Buildroot
	I0923 12:42:16.102390  533789 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 12:42:16.102397  533789 command_runner.go:130] > ID=buildroot
	I0923 12:42:16.102405  533789 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 12:42:16.102411  533789 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 12:42:16.102492  533789 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:42:16.102517  533789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-497735/.minikube/addons for local assets ...
	I0923 12:42:16.102592  533789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-497735/.minikube/files for local assets ...
	I0923 12:42:16.102687  533789 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem -> 5050122.pem in /etc/ssl/certs
	I0923 12:42:16.102700  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem -> /etc/ssl/certs/5050122.pem
	I0923 12:42:16.102845  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:42:16.113454  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem --> /etc/ssl/certs/5050122.pem (1708 bytes)
	I0923 12:42:16.138217  533789 start.go:296] duration metric: took 129.055498ms for postStartSetup
	I0923 12:42:16.138266  533789 fix.go:56] duration metric: took 21.82692205s for fixHost
	I0923 12:42:16.138288  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:16.140945  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.141324  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:16.141356  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.141503  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:16.141728  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:16.141871  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:16.142048  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:16.142264  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:16.142500  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:16.142563  533789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:42:16.251580  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727095336.230269678
	
	I0923 12:42:16.251609  533789 fix.go:216] guest clock: 1727095336.230269678
	I0923 12:42:16.251619  533789 fix.go:229] Guest: 2024-09-23 12:42:16.230269678 +0000 UTC Remote: 2024-09-23 12:42:16.138269746 +0000 UTC m=+21.976718596 (delta=91.999932ms)
	I0923 12:42:16.251647  533789 fix.go:200] guest clock delta is within tolerance: 91.999932ms
	I0923 12:42:16.251655  533789 start.go:83] releasing machines lock for "multinode-915704", held for 21.940334209s
	I0923 12:42:16.251699  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:16.251978  533789 main.go:141] libmachine: (multinode-915704) Calling .GetIP
	I0923 12:42:16.254836  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.255280  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:16.255316  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.255454  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:16.256095  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:16.256317  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:16.256412  533789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:42:16.256476  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:16.256519  533789 ssh_runner.go:195] Run: cat /version.json
	I0923 12:42:16.256546  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:16.259190  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.259449  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.259668  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:16.259702  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.259731  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:16.259746  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.259931  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:16.260082  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:16.260109  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:16.260233  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:16.260313  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:16.260393  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa Username:docker}
	I0923 12:42:16.260439  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:16.260536  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa Username:docker}
	I0923 12:42:16.339826  533789 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 12:42:16.381656  533789 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0923 12:42:16.382565  533789 ssh_runner.go:195] Run: systemctl --version
	I0923 12:42:16.388344  533789 command_runner.go:130] > systemd 252 (252)
	I0923 12:42:16.388388  533789 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0923 12:42:16.388453  533789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:42:16.393493  533789 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 12:42:16.393543  533789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:42:16.393605  533789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:42:16.408916  533789 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 12:42:16.409146  533789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:42:16.409168  533789 start.go:495] detecting cgroup driver to use...
	I0923 12:42:16.409326  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:42:16.426571  533789 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 12:42:16.426851  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:42:16.436866  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:42:16.446911  533789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:42:16.446995  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:42:16.457035  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:42:16.467429  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:42:16.477499  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:42:16.487596  533789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:42:16.497930  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:42:16.507816  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:42:16.517902  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 12:42:16.527754  533789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:42:16.536705  533789 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:42:16.536769  533789 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:42:16.536829  533789 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:42:16.546547  533789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:42:16.555715  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:16.665221  533789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:42:16.688142  533789 start.go:495] detecting cgroup driver to use...
	I0923 12:42:16.688239  533789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:42:16.702081  533789 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 12:42:16.702103  533789 command_runner.go:130] > [Unit]
	I0923 12:42:16.702110  533789 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 12:42:16.702115  533789 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 12:42:16.702121  533789 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 12:42:16.702125  533789 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 12:42:16.702131  533789 command_runner.go:130] > StartLimitBurst=3
	I0923 12:42:16.702134  533789 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 12:42:16.702138  533789 command_runner.go:130] > [Service]
	I0923 12:42:16.702142  533789 command_runner.go:130] > Type=notify
	I0923 12:42:16.702147  533789 command_runner.go:130] > Restart=on-failure
	I0923 12:42:16.702159  533789 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 12:42:16.702169  533789 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 12:42:16.702179  533789 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 12:42:16.702188  533789 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 12:42:16.702197  533789 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 12:42:16.702204  533789 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 12:42:16.702230  533789 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 12:42:16.702243  533789 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 12:42:16.702254  533789 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 12:42:16.702260  533789 command_runner.go:130] > ExecStart=
	I0923 12:42:16.702282  533789 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0923 12:42:16.702294  533789 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 12:42:16.702304  533789 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 12:42:16.702317  533789 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 12:42:16.702325  533789 command_runner.go:130] > LimitNOFILE=infinity
	I0923 12:42:16.702341  533789 command_runner.go:130] > LimitNPROC=infinity
	I0923 12:42:16.702349  533789 command_runner.go:130] > LimitCORE=infinity
	I0923 12:42:16.702354  533789 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 12:42:16.702359  533789 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 12:42:16.702363  533789 command_runner.go:130] > TasksMax=infinity
	I0923 12:42:16.702366  533789 command_runner.go:130] > TimeoutStartSec=0
	I0923 12:42:16.702372  533789 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 12:42:16.702375  533789 command_runner.go:130] > Delegate=yes
	I0923 12:42:16.702380  533789 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 12:42:16.702384  533789 command_runner.go:130] > KillMode=process
	I0923 12:42:16.702388  533789 command_runner.go:130] > [Install]
	I0923 12:42:16.702397  533789 command_runner.go:130] > WantedBy=multi-user.target
	I0923 12:42:16.702469  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:42:16.715509  533789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:42:16.732245  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:42:16.744842  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:42:16.757500  533789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:42:16.784826  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:42:16.798210  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:42:16.815435  533789 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 12:42:16.815728  533789 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:42:16.819203  533789 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 12:42:16.819330  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:42:16.828230  533789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:42:16.844054  533789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:42:16.955542  533789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:42:17.077008  533789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:42:17.077189  533789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:42:17.093712  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:17.201145  533789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:42:19.616419  533789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.415229229s)
	I0923 12:42:19.616515  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 12:42:19.630219  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:42:19.643847  533789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 12:42:19.760184  533789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 12:42:19.877347  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:19.999483  533789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 12:42:20.015640  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:42:20.028927  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:20.159732  533789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 12:42:20.232463  533789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 12:42:20.232537  533789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 12:42:20.237657  533789 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 12:42:20.237701  533789 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 12:42:20.237712  533789 command_runner.go:130] > Device: 0,22	Inode: 788         Links: 1
	I0923 12:42:20.237722  533789 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 12:42:20.237731  533789 command_runner.go:130] > Access: 2024-09-23 12:42:20.152105274 +0000
	I0923 12:42:20.237739  533789 command_runner.go:130] > Modify: 2024-09-23 12:42:20.152105274 +0000
	I0923 12:42:20.237746  533789 command_runner.go:130] > Change: 2024-09-23 12:42:20.155106259 +0000
	I0923 12:42:20.237752  533789 command_runner.go:130] >  Birth: -
	I0923 12:42:20.237980  533789 start.go:563] Will wait 60s for crictl version
	I0923 12:42:20.238067  533789 ssh_runner.go:195] Run: which crictl
	I0923 12:42:20.245328  533789 command_runner.go:130] > /usr/bin/crictl
	I0923 12:42:20.245417  533789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:42:20.280616  533789 command_runner.go:130] > Version:  0.1.0
	I0923 12:42:20.280646  533789 command_runner.go:130] > RuntimeName:  docker
	I0923 12:42:20.280653  533789 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 12:42:20.280661  533789 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 12:42:20.280725  533789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 12:42:20.280795  533789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:42:20.304475  533789 command_runner.go:130] > 27.3.0
	I0923 12:42:20.304587  533789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:42:20.323643  533789 command_runner.go:130] > 27.3.0
	I0923 12:42:20.326566  533789 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 12:42:20.326616  533789 main.go:141] libmachine: (multinode-915704) Calling .GetIP
	I0923 12:42:20.329410  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:20.329811  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:20.329831  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:20.330055  533789 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:42:20.333948  533789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:42:20.346048  533789 kubeadm.go:883] updating cluster {Name:multinode-915704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-915704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:42:20.346249  533789 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:42:20.346300  533789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:42:20.362481  533789 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 12:42:20.362520  533789 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 12:42:20.362526  533789 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 12:42:20.362531  533789 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 12:42:20.362536  533789 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0923 12:42:20.362544  533789 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 12:42:20.362558  533789 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 12:42:20.362562  533789 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 12:42:20.362567  533789 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:42:20.362571  533789 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0923 12:42:20.363429  533789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0923 12:42:20.363446  533789 docker.go:615] Images already preloaded, skipping extraction
	I0923 12:42:20.363514  533789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:42:20.379264  533789 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 12:42:20.379289  533789 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 12:42:20.379296  533789 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 12:42:20.379304  533789 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 12:42:20.379310  533789 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0923 12:42:20.379317  533789 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 12:42:20.379327  533789 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 12:42:20.379334  533789 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 12:42:20.379342  533789 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:42:20.379352  533789 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0923 12:42:20.379385  533789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0923 12:42:20.379404  533789 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:42:20.379417  533789 kubeadm.go:934] updating node { 192.168.39.233 8443 v1.31.1 docker true true} ...
	I0923 12:42:20.379539  533789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-915704 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-915704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:42:20.379604  533789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 12:42:20.425812  533789 command_runner.go:130] > cgroupfs
	I0923 12:42:20.427137  533789 cni.go:84] Creating CNI manager for ""
	I0923 12:42:20.427160  533789 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0923 12:42:20.427173  533789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:42:20.427204  533789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.233 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-915704 NodeName:multinode-915704 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:42:20.427358  533789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-915704"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 12:42:20.427440  533789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:42:20.437270  533789 command_runner.go:130] > kubeadm
	I0923 12:42:20.437303  533789 command_runner.go:130] > kubectl
	I0923 12:42:20.437310  533789 command_runner.go:130] > kubelet
	I0923 12:42:20.437335  533789 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:42:20.437391  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 12:42:20.446462  533789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0923 12:42:20.462889  533789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:42:20.478146  533789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0923 12:42:20.495258  533789 ssh_runner.go:195] Run: grep 192.168.39.233	control-plane.minikube.internal$ /etc/hosts
	I0923 12:42:20.498869  533789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:42:20.510578  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:20.624268  533789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:42:20.641714  533789 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704 for IP: 192.168.39.233
	I0923 12:42:20.641737  533789 certs.go:194] generating shared ca certs ...
	I0923 12:42:20.641757  533789 certs.go:226] acquiring lock for ca certs: {Name:mk368fdda7ea812502dc0809d673a3fd993c0e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:42:20.641971  533789 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.key
	I0923 12:42:20.642028  533789 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.key
	I0923 12:42:20.642040  533789 certs.go:256] generating profile certs ...
	I0923 12:42:20.642165  533789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/client.key
	I0923 12:42:20.642251  533789 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/apiserver.key.a42e38d5
	I0923 12:42:20.642300  533789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/proxy-client.key
	I0923 12:42:20.642318  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:42:20.642340  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:42:20.642367  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:42:20.642382  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:42:20.642396  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:42:20.642412  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:42:20.642430  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:42:20.642454  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:42:20.642521  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/505012.pem (1338 bytes)
	W0923 12:42:20.642552  533789 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-497735/.minikube/certs/505012_empty.pem, impossibly tiny 0 bytes
	I0923 12:42:20.642563  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 12:42:20.642587  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem (1078 bytes)
	I0923 12:42:20.642618  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:42:20.642642  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem (1679 bytes)
	I0923 12:42:20.642681  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem (1708 bytes)
	I0923 12:42:20.642718  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem -> /usr/share/ca-certificates/5050122.pem
	I0923 12:42:20.642733  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:42:20.642745  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/505012.pem -> /usr/share/ca-certificates/505012.pem
	I0923 12:42:20.643544  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:42:20.670290  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:42:20.695310  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:42:20.720071  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 12:42:20.746714  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 12:42:20.771697  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:42:20.799693  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:42:20.829407  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:42:20.859249  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem --> /usr/share/ca-certificates/5050122.pem (1708 bytes)
	I0923 12:42:20.884382  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:42:20.907020  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/certs/505012.pem --> /usr/share/ca-certificates/505012.pem (1338 bytes)
	I0923 12:42:20.929183  533789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:42:20.945058  533789 ssh_runner.go:195] Run: openssl version
	I0923 12:42:20.950601  533789 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 12:42:20.950695  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/505012.pem && ln -fs /usr/share/ca-certificates/505012.pem /etc/ssl/certs/505012.pem"
	I0923 12:42:20.961560  533789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/505012.pem
	I0923 12:42:20.965948  533789 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 12:08 /usr/share/ca-certificates/505012.pem
	I0923 12:42:20.965991  533789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:08 /usr/share/ca-certificates/505012.pem
	I0923 12:42:20.966059  533789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/505012.pem
	I0923 12:42:20.971429  533789 command_runner.go:130] > 51391683
	I0923 12:42:20.971538  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/505012.pem /etc/ssl/certs/51391683.0"
	I0923 12:42:20.981632  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5050122.pem && ln -fs /usr/share/ca-certificates/5050122.pem /etc/ssl/certs/5050122.pem"
	I0923 12:42:20.991643  533789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5050122.pem
	I0923 12:42:20.995670  533789 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 12:08 /usr/share/ca-certificates/5050122.pem
	I0923 12:42:20.995699  533789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:08 /usr/share/ca-certificates/5050122.pem
	I0923 12:42:20.995737  533789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5050122.pem
	I0923 12:42:21.000794  533789 command_runner.go:130] > 3ec20f2e
	I0923 12:42:21.000858  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5050122.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:42:21.011805  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:42:21.022099  533789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:42:21.026398  533789 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:53 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:42:21.026431  533789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:53 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:42:21.026471  533789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:42:21.031633  533789 command_runner.go:130] > b5213941
	I0923 12:42:21.031700  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
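The three `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's subject-hash lookup convention: a trust anchor is found under `/etc/ssl/certs` via a symlink named `<subject-hash>.0`. A minimal reproduction of that step, using hypothetical `/tmp` paths instead of minikube's cert directories:

```shell
# Throwaway self-signed CA (hypothetical demo paths, not minikube's).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.pem -days 1 -subj "/CN=demoCA" 2>/dev/null

# OpenSSL resolves CAs by subject hash, so the link must be named "<hash>.0",
# exactly like the 51391683.0 / 3ec20f2e.0 / b5213941.0 links in the log.
hash=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)
ln -fs /tmp/demo-ca.pem "/tmp/${hash}.0"

# The certificate is now reachable under its hash-named alias.
openssl x509 -noout -subject -in "/tmp/${hash}.0"
```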
	I0923 12:42:21.042241  533789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:42:21.046308  533789 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:42:21.046329  533789 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 12:42:21.046335  533789 command_runner.go:130] > Device: 253,1	Inode: 529449      Links: 1
	I0923 12:42:21.046341  533789 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 12:42:21.046347  533789 command_runner.go:130] > Access: 2024-09-23 12:39:27.175020451 +0000
	I0923 12:42:21.046352  533789 command_runner.go:130] > Modify: 2024-09-23 12:35:06.416439194 +0000
	I0923 12:42:21.046357  533789 command_runner.go:130] > Change: 2024-09-23 12:35:06.416439194 +0000
	I0923 12:42:21.046361  533789 command_runner.go:130] >  Birth: 2024-09-23 12:35:06.416439194 +0000
	I0923 12:42:21.046418  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 12:42:21.051809  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.051862  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 12:42:21.057169  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.057233  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 12:42:21.062559  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.062620  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 12:42:21.067757  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.067975  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 12:42:21.073165  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.073222  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 12:42:21.078221  533789 command_runner.go:130] > Certificate will not expire
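Each `-checkend 86400` invocation above asks whether the certificate will still be valid 24 hours from now: exit status 0 with `Certificate will not expire` if so, exit status 1 otherwise. A sketch of both outcomes with a throwaway two-day certificate (hypothetical file names):

```shell
# Throwaway self-signed cert, valid for 2 days (hypothetical file names).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ck.key \
  -out /tmp/ck.pem -days 2 -subj "/CN=checkend-demo" 2>/dev/null

# -checkend N: exit 0 if the cert is still valid N seconds from now.
openssl x509 -noout -in /tmp/ck.pem -checkend 86400      # 1 day out: passes

# 3 days out exceeds the cert's 2-day lifetime, so this exits non-zero.
openssl x509 -noout -in /tmp/ck.pem -checkend 259200 \
  || echo "would expire within 3 days"
```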
	I0923 12:42:21.078295  533789 kubeadm.go:392] StartCluster: {Name:multinode-915704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-915704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:42:21.078442  533789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 12:42:21.094733  533789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:42:21.104103  533789 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0923 12:42:21.104127  533789 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0923 12:42:21.104133  533789 command_runner.go:130] > /var/lib/minikube/etcd:
	I0923 12:42:21.104136  533789 command_runner.go:130] > member
	I0923 12:42:21.104152  533789 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 12:42:21.104157  533789 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 12:42:21.104199  533789 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 12:42:21.112982  533789 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 12:42:21.113531  533789 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-915704" does not appear in /home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 12:42:21.113680  533789 kubeconfig.go:62] /home/jenkins/minikube-integration/19690-497735/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-915704" cluster setting kubeconfig missing "multinode-915704" context setting]
	I0923 12:42:21.114016  533789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/kubeconfig: {Name:mk0cef7f71c4fa7d96e459b50c6c36de6d1dd40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:42:21.114539  533789 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 12:42:21.114969  533789 kapi.go:59] client config for multinode-915704: &rest.Config{Host:"https://192.168.39.233:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/client.key", CAFile:"/home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 12:42:21.115541  533789 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 12:42:21.115868  533789 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 12:42:21.124975  533789 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.233
	I0923 12:42:21.125007  533789 kubeadm.go:1160] stopping kube-system containers ...
	I0923 12:42:21.125072  533789 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 12:42:21.145616  533789 command_runner.go:130] > 879b72b8d259
	I0923 12:42:21.145642  533789 command_runner.go:130] > d8c6fd4c3645
	I0923 12:42:21.145648  533789 command_runner.go:130] > 7fd3389600c2
	I0923 12:42:21.145653  533789 command_runner.go:130] > f514f107aa3b
	I0923 12:42:21.145662  533789 command_runner.go:130] > d38c9e4cde35
	I0923 12:42:21.145667  533789 command_runner.go:130] > e9ab80b3cbfc
	I0923 12:42:21.145672  533789 command_runner.go:130] > de517e94d278
	I0923 12:42:21.145678  533789 command_runner.go:130] > 496b1236003c
	I0923 12:42:21.145685  533789 command_runner.go:130] > 8a5b138b0124
	I0923 12:42:21.145692  533789 command_runner.go:130] > 80c39f229adc
	I0923 12:42:21.145698  533789 command_runner.go:130] > 1b119ee22f96
	I0923 12:42:21.145704  533789 command_runner.go:130] > 2b978cfcf3ae
	I0923 12:42:21.145712  533789 command_runner.go:130] > 3bb7d4eec409
	I0923 12:42:21.145721  533789 command_runner.go:130] > 40e23befdd45
	I0923 12:42:21.145727  533789 command_runner.go:130] > 71016b8c92e5
	I0923 12:42:21.145734  533789 command_runner.go:130] > 3076f80c7c38
	I0923 12:42:21.145740  533789 command_runner.go:130] > 68460215bbe1
	I0923 12:42:21.145749  533789 command_runner.go:130] > d1d519e9923c
	I0923 12:42:21.145757  533789 command_runner.go:130] > 211d988c9d96
	I0923 12:42:21.145767  533789 command_runner.go:130] > ac46f137f49c
	I0923 12:42:21.145773  533789 command_runner.go:130] > 4e46bbfe6817
	I0923 12:42:21.145779  533789 command_runner.go:130] > 7e25ba8cd0a9
	I0923 12:42:21.145788  533789 command_runner.go:130] > 8fd55670f04e
	I0923 12:42:21.145793  533789 command_runner.go:130] > 1cbcfb4b5626
	I0923 12:42:21.145808  533789 command_runner.go:130] > 7525dc942184
	I0923 12:42:21.145817  533789 command_runner.go:130] > 2cdbfd7d1582
	I0923 12:42:21.145823  533789 command_runner.go:130] > 1404684c04c1
	I0923 12:42:21.145829  533789 command_runner.go:130] > e5fb23e4e105
	I0923 12:42:21.145834  533789 command_runner.go:130] > 81597a6b6693
	I0923 12:42:21.145838  533789 command_runner.go:130] > 45b8c1866fc9
	I0923 12:42:21.145844  533789 command_runner.go:130] > 137cd5a0f196
	I0923 12:42:21.145874  533789 docker.go:483] Stopping containers: [879b72b8d259 d8c6fd4c3645 7fd3389600c2 f514f107aa3b d38c9e4cde35 e9ab80b3cbfc de517e94d278 496b1236003c 8a5b138b0124 80c39f229adc 1b119ee22f96 2b978cfcf3ae 3bb7d4eec409 40e23befdd45 71016b8c92e5 3076f80c7c38 68460215bbe1 d1d519e9923c 211d988c9d96 ac46f137f49c 4e46bbfe6817 7e25ba8cd0a9 8fd55670f04e 1cbcfb4b5626 7525dc942184 2cdbfd7d1582 1404684c04c1 e5fb23e4e105 81597a6b6693 45b8c1866fc9 137cd5a0f196]
	I0923 12:42:21.145958  533789 ssh_runner.go:195] Run: docker stop 879b72b8d259 d8c6fd4c3645 7fd3389600c2 f514f107aa3b d38c9e4cde35 e9ab80b3cbfc de517e94d278 496b1236003c 8a5b138b0124 80c39f229adc 1b119ee22f96 2b978cfcf3ae 3bb7d4eec409 40e23befdd45 71016b8c92e5 3076f80c7c38 68460215bbe1 d1d519e9923c 211d988c9d96 ac46f137f49c 4e46bbfe6817 7e25ba8cd0a9 8fd55670f04e 1cbcfb4b5626 7525dc942184 2cdbfd7d1582 1404684c04c1 e5fb23e4e105 81597a6b6693 45b8c1866fc9 137cd5a0f196
	I0923 12:42:21.168377  533789 command_runner.go:130] > 879b72b8d259
	I0923 12:42:21.168407  533789 command_runner.go:130] > d8c6fd4c3645
	I0923 12:42:21.168420  533789 command_runner.go:130] > 7fd3389600c2
	I0923 12:42:21.168425  533789 command_runner.go:130] > f514f107aa3b
	I0923 12:42:21.168430  533789 command_runner.go:130] > d38c9e4cde35
	I0923 12:42:21.168434  533789 command_runner.go:130] > e9ab80b3cbfc
	I0923 12:42:21.168439  533789 command_runner.go:130] > de517e94d278
	I0923 12:42:21.168442  533789 command_runner.go:130] > 496b1236003c
	I0923 12:42:21.168446  533789 command_runner.go:130] > 8a5b138b0124
	I0923 12:42:21.168454  533789 command_runner.go:130] > 80c39f229adc
	I0923 12:42:21.168463  533789 command_runner.go:130] > 1b119ee22f96
	I0923 12:42:21.168471  533789 command_runner.go:130] > 2b978cfcf3ae
	I0923 12:42:21.168478  533789 command_runner.go:130] > 3bb7d4eec409
	I0923 12:42:21.168486  533789 command_runner.go:130] > 40e23befdd45
	I0923 12:42:21.168497  533789 command_runner.go:130] > 71016b8c92e5
	I0923 12:42:21.168504  533789 command_runner.go:130] > 3076f80c7c38
	I0923 12:42:21.168509  533789 command_runner.go:130] > 68460215bbe1
	I0923 12:42:21.168513  533789 command_runner.go:130] > d1d519e9923c
	I0923 12:42:21.168517  533789 command_runner.go:130] > 211d988c9d96
	I0923 12:42:21.168522  533789 command_runner.go:130] > ac46f137f49c
	I0923 12:42:21.168529  533789 command_runner.go:130] > 4e46bbfe6817
	I0923 12:42:21.168533  533789 command_runner.go:130] > 7e25ba8cd0a9
	I0923 12:42:21.168538  533789 command_runner.go:130] > 8fd55670f04e
	I0923 12:42:21.168543  533789 command_runner.go:130] > 1cbcfb4b5626
	I0923 12:42:21.168553  533789 command_runner.go:130] > 7525dc942184
	I0923 12:42:21.168561  533789 command_runner.go:130] > 2cdbfd7d1582
	I0923 12:42:21.168571  533789 command_runner.go:130] > 1404684c04c1
	I0923 12:42:21.168579  533789 command_runner.go:130] > e5fb23e4e105
	I0923 12:42:21.168588  533789 command_runner.go:130] > 81597a6b6693
	I0923 12:42:21.168596  533789 command_runner.go:130] > 45b8c1866fc9
	I0923 12:42:21.168606  533789 command_runner.go:130] > 137cd5a0f196
	I0923 12:42:21.168687  533789 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 12:42:21.183915  533789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:42:21.192942  533789 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0923 12:42:21.192975  533789 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0923 12:42:21.192986  533789 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0923 12:42:21.192998  533789 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:42:21.193268  533789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:42:21.193294  533789 kubeadm.go:157] found existing configuration files:
	
	I0923 12:42:21.193341  533789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:42:21.201424  533789 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:42:21.201464  533789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:42:21.201508  533789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:42:21.209737  533789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:42:21.217567  533789 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:42:21.217609  533789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:42:21.217648  533789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:42:21.225920  533789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:42:21.233973  533789 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:42:21.234025  533789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:42:21.234079  533789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:42:21.242621  533789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:42:21.250917  533789 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:42:21.250956  533789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:42:21.250997  533789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 12:42:21.259635  533789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:42:21.268532  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:21.388694  533789 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:42:21.388900  533789 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0923 12:42:21.389024  533789 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0923 12:42:21.389232  533789 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 12:42:21.389578  533789 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0923 12:42:21.389782  533789 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0923 12:42:21.390472  533789 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0923 12:42:21.390559  533789 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0923 12:42:21.390725  533789 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0923 12:42:21.390902  533789 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 12:42:21.391054  533789 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 12:42:21.391399  533789 command_runner.go:130] > [certs] Using the existing "sa" key
	I0923 12:42:21.392746  533789 command_runner.go:130] ! W0923 12:42:21.365889    1375 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:21.392783  533789 command_runner.go:130] ! W0923 12:42:21.366709    1375 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:21.392819  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:21.431061  533789 command_runner.go:130] ! W0923 12:42:21.411249    1380 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:21.431746  533789 command_runner.go:130] ! W0923 12:42:21.412124    1380 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.145915  533789 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:42:22.145951  533789 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:42:22.145961  533789 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:42:22.145970  533789 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:42:22.145980  533789 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:42:22.145988  533789 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:42:22.146028  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:22.185467  533789 command_runner.go:130] ! W0923 12:42:22.165916    1385 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.186157  533789 command_runner.go:130] ! W0923 12:42:22.166715    1385 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.344824  533789 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:42:22.344860  533789 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:42:22.344868  533789 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 12:42:22.344900  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:22.396750  533789 command_runner.go:130] ! W0923 12:42:22.377224    1411 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.397566  533789 command_runner.go:130] ! W0923 12:42:22.378173    1411 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.412204  533789 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:42:22.412237  533789 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:42:22.412248  533789 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:42:22.412263  533789 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:42:22.412362  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:22.517883  533789 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:42:22.534639  533789 command_runner.go:130] ! W0923 12:42:22.495057    1419 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.534685  533789 command_runner.go:130] ! W0923 12:42:22.495654    1419 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.534727  533789 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:42:22.534830  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:42:23.035840  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:42:23.535051  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:42:24.034928  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:42:24.535177  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:42:24.562048  533789 command_runner.go:130] > 1725
	I0923 12:42:24.562121  533789 api_server.go:72] duration metric: took 2.027393055s to wait for apiserver process to appear ...
	I0923 12:42:24.562137  533789 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:42:24.562170  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:24.562781  533789 api_server.go:269] stopped: https://192.168.39.233:8443/healthz: Get "https://192.168.39.233:8443/healthz": dial tcp 192.168.39.233:8443: connect: connection refused
	I0923 12:42:25.062528  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:27.282012  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 12:42:27.282045  533789 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 12:42:27.282060  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:27.364123  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 12:42:27.364158  533789 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 12:42:27.562477  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:27.572417  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 12:42:27.572444  533789 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 12:42:28.063130  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:28.067571  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 12:42:28.067604  533789 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 12:42:28.562227  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:28.566617  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 200:
	ok
	I0923 12:42:28.566709  533789 round_trippers.go:463] GET https://192.168.39.233:8443/version
	I0923 12:42:28.566721  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:28.566731  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:28.566737  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:28.575825  533789 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 12:42:28.575852  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:28.575863  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:28 GMT
	I0923 12:42:28.575871  533789 round_trippers.go:580]     Audit-Id: f38d41fb-0b76-4626-859f-b3e0af1123c3
	I0923 12:42:28.575876  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:28.575880  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:28.575883  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:28.575887  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:28.575891  533789 round_trippers.go:580]     Content-Length: 263
	I0923 12:42:28.575914  533789 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 12:42:28.576071  533789 api_server.go:141] control plane version: v1.31.1
	I0923 12:42:28.576094  533789 api_server.go:131] duration metric: took 4.013948446s to wait for apiserver health ...
	I0923 12:42:28.576115  533789 cni.go:84] Creating CNI manager for ""
	I0923 12:42:28.576125  533789 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0923 12:42:28.577866  533789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 12:42:28.579234  533789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 12:42:28.588805  533789 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0923 12:42:28.588828  533789 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0923 12:42:28.588835  533789 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0923 12:42:28.588841  533789 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 12:42:28.588846  533789 command_runner.go:130] > Access: 2024-09-23 12:42:05.238234495 +0000
	I0923 12:42:28.588852  533789 command_runner.go:130] > Modify: 2024-09-20 04:01:25.000000000 +0000
	I0923 12:42:28.588857  533789 command_runner.go:130] > Change: 2024-09-23 12:42:04.205625858 +0000
	I0923 12:42:28.588860  533789 command_runner.go:130] >  Birth: -
	I0923 12:42:28.589146  533789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 12:42:28.589165  533789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 12:42:28.623790  533789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 12:42:28.977419  533789 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0923 12:42:28.998087  533789 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0923 12:42:29.079465  533789 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0923 12:42:29.128902  533789 command_runner.go:130] > daemonset.apps/kindnet configured
	I0923 12:42:29.130730  533789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:42:29.130828  533789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 12:42:29.130856  533789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 12:42:29.130939  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:42:29.130948  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.130956  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.130960  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.134526  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:29.134552  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.134563  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.134569  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.134577  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.134581  533789 round_trippers.go:580]     Audit-Id: 7857f266-3c2f-46bc-b251-f72a7a987416
	I0923 12:42:29.134585  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.134589  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.135260  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1151"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90669 chars]
	I0923 12:42:29.140173  533789 system_pods.go:59] 12 kube-system pods found
	I0923 12:42:29.140202  533789 system_pods.go:61] "coredns-7c65d6cfc9-s5jv2" [0dc645c9-049b-41b4-abb9-efb0c3496da5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 12:42:29.140210  533789 system_pods.go:61] "etcd-multinode-915704" [298e300f-3a4d-4d3c-803d-d4aa5e369e92] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0923 12:42:29.140216  533789 system_pods.go:61] "kindnet-cddh6" [f28822f1-bc2c-491a-b022-35c17323bab5] Running
	I0923 12:42:29.140224  533789 system_pods.go:61] "kindnet-kt7cw" [130be908-3588-4c06-8595-64df636abc2b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0923 12:42:29.140229  533789 system_pods.go:61] "kindnet-lb8gc" [b3215e24-3c69-4da8-8b5e-db638532efe2] Running
	I0923 12:42:29.140240  533789 system_pods.go:61] "kube-apiserver-multinode-915704" [2c5266db-b2d2-41ac-8bf7-eda1b883d3e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 12:42:29.140255  533789 system_pods.go:61] "kube-controller-manager-multinode-915704" [b95455eb-960c-44bf-9c6d-b39459f4c498] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 12:42:29.140262  533789 system_pods.go:61] "kube-proxy-hgdzz" [c9ae5011-0233-4713-83c0-5bbc9829abf9] Running
	I0923 12:42:29.140266  533789 system_pods.go:61] "kube-proxy-jthg2" [5bb0bd6f-f3b5-4750-bf9c-dd24b581e10f] Running
	I0923 12:42:29.140271  533789 system_pods.go:61] "kube-proxy-rmgjt" [d5d86b98-706f-411f-8209-017ecf7d533f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0923 12:42:29.140280  533789 system_pods.go:61] "kube-scheduler-multinode-915704" [6fdd28a4-9d1c-47b1-b14c-212986f47650] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0923 12:42:29.140288  533789 system_pods.go:61] "storage-provisioner" [ec90818c-184f-4066-a5c9-f4875d0b1354] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 12:42:29.140294  533789 system_pods.go:74] duration metric: took 9.545933ms to wait for pod list to return data ...
	I0923 12:42:29.140304  533789 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:42:29.140365  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes
	I0923 12:42:29.140374  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.140381  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.140384  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.143494  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:29.143518  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.143528  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.143534  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.143537  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.143541  533789 round_trippers.go:580]     Audit-Id: 1fdd96ce-c597-45be-9f42-3ea774de53ce
	I0923 12:42:29.143546  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.143549  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.143727  533789 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1151"},"items":[{"metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10145 chars]
	I0923 12:42:29.144443  533789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:42:29.144467  533789 node_conditions.go:123] node cpu capacity is 2
	I0923 12:42:29.144482  533789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:42:29.144492  533789 node_conditions.go:123] node cpu capacity is 2
	I0923 12:42:29.144498  533789 node_conditions.go:105] duration metric: took 4.189809ms to run NodePressure ...
	I0923 12:42:29.144521  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:29.187820  533789 command_runner.go:130] ! W0923 12:42:29.169883    2180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:29.188540  533789 command_runner.go:130] ! W0923 12:42:29.170720    2180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:29.501847  533789 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0923 12:42:29.501886  533789 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0923 12:42:29.501921  533789 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0923 12:42:29.502053  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0923 12:42:29.502071  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.502082  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.502087  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.508395  533789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:42:29.508417  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.508424  533789 round_trippers.go:580]     Audit-Id: edb8cc42-b9b2-4ed2-a234-dd66172bc585
	I0923 12:42:29.508429  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.508433  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.508436  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.508438  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.508441  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.508801  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1156"},"items":[{"metadata":{"name":"etcd-multinode-915704","namespace":"kube-system","uid":"298e300f-3a4d-4d3c-803d-d4aa5e369e92","resourceVersion":"1143","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.233:2379","kubernetes.io/config.hash":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.mirror":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.seen":"2024-09-23T12:35:14.769599942Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31285 chars]
	I0923 12:42:29.509913  533789 kubeadm.go:739] kubelet initialised
	I0923 12:42:29.509936  533789 kubeadm.go:740] duration metric: took 8.003121ms waiting for restarted kubelet to initialise ...
	I0923 12:42:29.509947  533789 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:42:29.510028  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:42:29.510039  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.510050  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.510057  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.513734  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:29.513758  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.513769  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.513774  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.513785  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.513791  533789 round_trippers.go:580]     Audit-Id: abf033d5-0214-4fbf-ae69-23f900e14896
	I0923 12:42:29.513795  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.513799  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.515330  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1156"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90076 chars]
	I0923 12:42:29.518838  533789 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.518952  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:29.518963  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.518973  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.518979  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.521631  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:29.521645  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.521652  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.521656  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.521660  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.521663  533789 round_trippers.go:580]     Audit-Id: 6028a39d-10a3-46a8-a268-79ddb5e78a08
	I0923 12:42:29.521666  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.521669  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.522189  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:29.522643  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:29.522658  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.522665  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.522671  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.524603  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:29.524620  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.524629  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.524634  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.524639  533789 round_trippers.go:580]     Audit-Id: 7cfbca6d-829d-4637-9611-c4f81fbdd596
	I0923 12:42:29.524643  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.524646  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.524649  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.524813  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I0923 12:42:29.525206  533789 pod_ready.go:98] node "multinode-915704" hosting pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.525228  533789 pod_ready.go:82] duration metric: took 6.364909ms for pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:29.525237  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.525246  533789 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.525300  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-915704
	I0923 12:42:29.525308  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.525315  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.525318  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.527610  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:29.527623  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.527631  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.527636  533789 round_trippers.go:580]     Audit-Id: f34c6907-7136-476b-adcc-9dd51f0fe40c
	I0923 12:42:29.527643  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.527647  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.527652  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.527657  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.528004  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-915704","namespace":"kube-system","uid":"298e300f-3a4d-4d3c-803d-d4aa5e369e92","resourceVersion":"1143","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.233:2379","kubernetes.io/config.hash":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.mirror":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.seen":"2024-09-23T12:35:14.769599942Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6910 chars]
	I0923 12:42:29.528376  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:29.528389  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.528396  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.528399  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.530284  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:29.530309  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.530317  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.530322  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.530325  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.530330  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.530334  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.530337  533789 round_trippers.go:580]     Audit-Id: 6be7f4ef-e443-4e48-97fc-0258f0b4abc0
	I0923 12:42:29.530466  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I0923 12:42:29.530869  533789 pod_ready.go:98] node "multinode-915704" hosting pod "etcd-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.530894  533789 pod_ready.go:82] duration metric: took 5.640622ms for pod "etcd-multinode-915704" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:29.530908  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "etcd-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.530930  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.530990  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-915704
	I0923 12:42:29.530998  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.531004  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.531008  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.532910  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:29.532922  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.532930  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.532935  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.532940  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.532944  533789 round_trippers.go:580]     Audit-Id: 517e19be-8ed0-450a-8d33-f09a5895cd7b
	I0923 12:42:29.532948  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.532953  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.533117  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-915704","namespace":"kube-system","uid":"2c5266db-b2d2-41ac-8bf7-eda1b883d3e3","resourceVersion":"1141","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.233:8443","kubernetes.io/config.hash":"3115e5dacc8088b6f9144058d3597214","kubernetes.io/config.mirror":"3115e5dacc8088b6f9144058d3597214","kubernetes.io/config.seen":"2024-09-23T12:35:14.769595152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8156 chars]
	I0923 12:42:29.533623  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:29.533642  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.533652  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.533659  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.535470  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:29.535483  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.535491  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.535496  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.535501  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.535504  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.535507  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.535512  533789 round_trippers.go:580]     Audit-Id: bc3c8040-96d8-4206-b5bb-b44df2247faf
	I0923 12:42:29.535636  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I0923 12:42:29.535977  533789 pod_ready.go:98] node "multinode-915704" hosting pod "kube-apiserver-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.536002  533789 pod_ready.go:82] duration metric: took 5.060553ms for pod "kube-apiserver-multinode-915704" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:29.536017  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "kube-apiserver-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.536026  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.536125  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-915704
	I0923 12:42:29.536140  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.536150  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.536161  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.538067  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:29.538081  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.538088  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.538093  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.538099  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.538103  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.538108  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.538131  533789 round_trippers.go:580]     Audit-Id: 6286f72d-605c-4c1f-bbff-417dd4ab934a
	I0923 12:42:29.538241  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-915704","namespace":"kube-system","uid":"b95455eb-960c-44bf-9c6d-b39459f4c498","resourceVersion":"1142","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02fde30fd2ad3cda5e3cacafb6edf88d","kubernetes.io/config.mirror":"02fde30fd2ad3cda5e3cacafb6edf88d","kubernetes.io/config.seen":"2024-09-23T12:35:14.769598186Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0923 12:42:29.538724  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:29.538774  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.538785  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.538801  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.540840  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:29.540854  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.540860  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.540865  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.540867  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.540870  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.540873  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.540877  533789 round_trippers.go:580]     Audit-Id: 367f5179-d2ec-4acd-84c0-cbdb1608f458
	I0923 12:42:29.541020  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I0923 12:42:29.541322  533789 pod_ready.go:98] node "multinode-915704" hosting pod "kube-controller-manager-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.541343  533789 pod_ready.go:82] duration metric: took 5.305107ms for pod "kube-controller-manager-multinode-915704" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:29.541355  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "kube-controller-manager-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.541365  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hgdzz" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.731843  533789 request.go:632] Waited for 190.380157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgdzz
	I0923 12:42:29.731917  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgdzz
	I0923 12:42:29.731925  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.731934  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.731940  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.734735  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:29.734775  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.734785  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.734792  533789 round_trippers.go:580]     Audit-Id: b670af2b-982f-4ca8-853b-928c422e59a5
	I0923 12:42:29.734795  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.734801  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.734805  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.734809  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.735012  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgdzz","generateName":"kube-proxy-","namespace":"kube-system","uid":"c9ae5011-0233-4713-83c0-5bbc9829abf9","resourceVersion":"991","creationTimestamp":"2024-09-23T12:36:10Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:36:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6207 chars]
	I0923 12:42:29.931876  533789 request.go:632] Waited for 196.408791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m02
	I0923 12:42:29.931987  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m02
	I0923 12:42:29.931995  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.932007  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.932016  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.934531  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:29.934563  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.934573  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.934581  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.934588  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.934592  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.934597  533789 round_trippers.go:580]     Audit-Id: 927cf82d-fdec-4696-8b85-cf4a9db2589d
	I0923 12:42:29.934601  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.934797  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704-m02","uid":"aee80d3c-b81a-428e-9a4a-6e531d5a77ec","resourceVersion":"1015","creationTimestamp":"2024-09-23T12:40:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T12_40_23_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:40:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3814 chars]
	I0923 12:42:29.935137  533789 pod_ready.go:93] pod "kube-proxy-hgdzz" in "kube-system" namespace has status "Ready":"True"
	I0923 12:42:29.935160  533789 pod_ready.go:82] duration metric: took 393.782321ms for pod "kube-proxy-hgdzz" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.935174  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jthg2" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:30.131376  533789 request.go:632] Waited for 196.095228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jthg2
	I0923 12:42:30.131455  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jthg2
	I0923 12:42:30.131464  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:30.131475  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:30.131485  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:30.134642  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:30.134667  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:30.134676  533789 round_trippers.go:580]     Audit-Id: 020885f6-0e70-405a-9bbb-735061f7cd86
	I0923 12:42:30.134682  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:30.134686  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:30.134689  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:30.134695  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:30.134700  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:30 GMT
	I0923 12:42:30.135172  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jthg2","generateName":"kube-proxy-","namespace":"kube-system","uid":"5bb0bd6f-f3b5-4750-bf9c-dd24b581e10f","resourceVersion":"1090","creationTimestamp":"2024-09-23T12:37:12Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6203 chars]
	I0923 12:42:30.331096  533789 request.go:632] Waited for 195.293446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m03
	I0923 12:42:30.331175  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m03
	I0923 12:42:30.331181  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:30.331190  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:30.331195  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:30.333523  533789 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0923 12:42:30.333549  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:30.333556  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:30.333561  533789 round_trippers.go:580]     Content-Length: 210
	I0923 12:42:30.333564  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:30 GMT
	I0923 12:42:30.333567  533789 round_trippers.go:580]     Audit-Id: 300597c5-eada-459b-805b-c35430712c2a
	I0923 12:42:30.333571  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:30.333574  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:30.333578  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:30.333605  533789 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-915704-m03\" not found","reason":"NotFound","details":{"name":"multinode-915704-m03","kind":"nodes"},"code":404}
	I0923 12:42:30.333816  533789 pod_ready.go:98] node "multinode-915704-m03" hosting pod "kube-proxy-jthg2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-915704-m03": nodes "multinode-915704-m03" not found
	I0923 12:42:30.333832  533789 pod_ready.go:82] duration metric: took 398.650792ms for pod "kube-proxy-jthg2" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:30.333841  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704-m03" hosting pod "kube-proxy-jthg2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-915704-m03": nodes "multinode-915704-m03" not found
	I0923 12:42:30.333848  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rmgjt" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:30.530978  533789 request.go:632] Waited for 197.040271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rmgjt
	I0923 12:42:30.531100  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rmgjt
	I0923 12:42:30.531109  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:30.531121  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:30.531125  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:30.533973  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:30.533993  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:30.534000  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:30.534004  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:30 GMT
	I0923 12:42:30.534008  533789 round_trippers.go:580]     Audit-Id: 62f677e2-9ccb-486e-a8e6-0fe4db71017c
	I0923 12:42:30.534011  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:30.534014  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:30.534018  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:30.534387  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rmgjt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5d86b98-706f-411f-8209-017ecf7d533f","resourceVersion":"1152","creationTimestamp":"2024-09-23T12:35:19Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0923 12:42:30.731198  533789 request.go:632] Waited for 196.327014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:30.731316  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:30.731324  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:30.731337  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:30.731343  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:30.734011  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:30.734040  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:30.734061  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:30.734068  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:30 GMT
	I0923 12:42:30.734073  533789 round_trippers.go:580]     Audit-Id: 9cf1d51c-dfd7-493e-b12d-d62198008d42
	I0923 12:42:30.734078  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:30.734082  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:30.734087  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:30.734207  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I0923 12:42:30.734626  533789 pod_ready.go:98] node "multinode-915704" hosting pod "kube-proxy-rmgjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:30.734654  533789 pod_ready.go:82] duration metric: took 400.794498ms for pod "kube-proxy-rmgjt" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:30.734666  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "kube-proxy-rmgjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:30.734677  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:30.931560  533789 request.go:632] Waited for 196.800785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-915704
	I0923 12:42:30.931663  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-915704
	I0923 12:42:30.931672  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:30.931685  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:30.931694  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:30.951274  533789 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0923 12:42:30.951311  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:30.951320  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:30.951323  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:30.951326  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:30.951330  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:30.951333  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:30 GMT
	I0923 12:42:30.951336  533789 round_trippers.go:580]     Audit-Id: 58b43bc2-e600-40a7-9e27-c48bf1945827
	I0923 12:42:30.953204  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-915704","namespace":"kube-system","uid":"6fdd28a4-9d1c-47b1-b14c-212986f47650","resourceVersion":"1146","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f436c981b3942bad9048e7a5ca8911e5","kubernetes.io/config.mirror":"f436c981b3942bad9048e7a5ca8911e5","kubernetes.io/config.seen":"2024-09-23T12:35:14.769599203Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0923 12:42:31.131685  533789 request.go:632] Waited for 177.839386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:31.131755  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:31.131762  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:31.131774  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:31.131784  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:31.136470  533789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:42:31.136494  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:31.136501  533789 round_trippers.go:580]     Audit-Id: 223a0b1c-d047-4038-bdcd-84f0812fce5d
	I0923 12:42:31.136506  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:31.136509  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:31.136512  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:31.136514  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:31.136517  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:31 GMT
	I0923 12:42:31.137033  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:31.137382  533789 pod_ready.go:98] node "multinode-915704" hosting pod "kube-scheduler-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:31.137406  533789 pod_ready.go:82] duration metric: took 402.720727ms for pod "kube-scheduler-multinode-915704" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:31.137419  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "kube-scheduler-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:31.137432  533789 pod_ready.go:39] duration metric: took 1.627474764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:42:31.137457  533789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:42:31.149418  533789 command_runner.go:130] > -16
	I0923 12:42:31.149455  533789 ops.go:34] apiserver oom_adj: -16
	I0923 12:42:31.149463  533789 kubeadm.go:597] duration metric: took 10.045299949s to restartPrimaryControlPlane
	I0923 12:42:31.149473  533789 kubeadm.go:394] duration metric: took 10.071191225s to StartCluster
	I0923 12:42:31.149499  533789 settings.go:142] acquiring lock: {Name:mke8a2c3e1b68f8bfc3d2a76cd3ad640f66f3e7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:42:31.149584  533789 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 12:42:31.150356  533789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/kubeconfig: {Name:mk0cef7f71c4fa7d96e459b50c6c36de6d1dd40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:42:31.150592  533789 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:42:31.150710  533789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 12:42:31.150851  533789 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:42:31.153013  533789 out.go:177] * Enabled addons: 
	I0923 12:42:31.153017  533789 out.go:177] * Verifying Kubernetes components...
	I0923 12:42:31.154079  533789 addons.go:510] duration metric: took 3.376252ms for enable addons: enabled=[]
	I0923 12:42:31.154161  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:31.329069  533789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:42:31.345078  533789 node_ready.go:35] waiting up to 6m0s for node "multinode-915704" to be "Ready" ...
	I0923 12:42:31.345228  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:31.345242  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:31.345252  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:31.345261  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:31.347704  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:31.347728  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:31.347738  533789 round_trippers.go:580]     Audit-Id: 61422360-1a2c-4cab-8779-e131b5dbcd38
	I0923 12:42:31.347744  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:31.347750  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:31.347756  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:31.347765  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:31.347770  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:31 GMT
	I0923 12:42:31.348180  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:31.845956  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:31.845983  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:31.845992  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:31.845997  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:31.848549  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:31.848578  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:31.848589  533789 round_trippers.go:580]     Audit-Id: afd8cdd9-02d3-440c-9a71-bce4976fb9ed
	I0923 12:42:31.848594  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:31.848599  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:31.848605  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:31.848609  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:31.848613  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:31 GMT
	I0923 12:42:31.848801  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:32.345451  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:32.345485  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:32.345495  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:32.345499  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:32.348058  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:32.348091  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:32.348102  533789 round_trippers.go:580]     Audit-Id: d7101898-bf9f-494d-a27c-ae7434f31098
	I0923 12:42:32.348107  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:32.348112  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:32.348116  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:32.348121  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:32.348127  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:32 GMT
	I0923 12:42:32.348297  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:32.846036  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:32.846068  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:32.846079  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:32.846085  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:32.849605  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:32.849627  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:32.849636  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:32.849640  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:32.849643  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:32.849646  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:32 GMT
	I0923 12:42:32.849650  533789 round_trippers.go:580]     Audit-Id: bd7ae99b-1093-4334-846e-9daa5f8db999
	I0923 12:42:32.849655  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:32.850210  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:33.345978  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:33.346010  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:33.346022  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:33.346027  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:33.349914  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:33.349945  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:33.349955  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:33 GMT
	I0923 12:42:33.349961  533789 round_trippers.go:580]     Audit-Id: dfc25ce5-3c33-4dd3-9a80-88a36ce3daea
	I0923 12:42:33.349965  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:33.349971  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:33.349974  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:33.349979  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:33.350193  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:33.350667  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:33.845896  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:33.845921  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:33.845930  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:33.845935  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:33.848565  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:33.848592  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:33.848598  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:33 GMT
	I0923 12:42:33.848602  533789 round_trippers.go:580]     Audit-Id: 032176ac-a145-4b9c-b718-02a6b226c5da
	I0923 12:42:33.848606  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:33.848608  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:33.848611  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:33.848614  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:33.848731  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:34.345338  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:34.345362  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:34.345370  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:34.345375  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:34.348151  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:34.348171  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:34.348178  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:34.348182  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:34.348185  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:34.348190  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:34.348195  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:34 GMT
	I0923 12:42:34.348197  533789 round_trippers.go:580]     Audit-Id: 0fff79e0-85dd-42b9-83e4-4d18c587b97d
	I0923 12:42:34.348323  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:34.846105  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:34.846134  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:34.846144  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:34.846148  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:34.848587  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:34.848608  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:34.848616  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:34.848620  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:34.848623  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:34 GMT
	I0923 12:42:34.848626  533789 round_trippers.go:580]     Audit-Id: 5dbef401-4301-43ff-b85d-298a451539d7
	I0923 12:42:34.848629  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:34.848633  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:34.848815  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:35.345413  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:35.345441  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:35.345451  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:35.345455  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:35.347957  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:35.347979  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:35.347986  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:35 GMT
	I0923 12:42:35.347990  533789 round_trippers.go:580]     Audit-Id: 8e2c9e4f-9144-4942-b77f-f931bdcb1b0c
	I0923 12:42:35.347993  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:35.347996  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:35.347998  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:35.348002  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:35.348190  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:35.845960  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:35.845987  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:35.845997  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:35.846005  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:35.848746  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:35.848773  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:35.848781  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:35 GMT
	I0923 12:42:35.848784  533789 round_trippers.go:580]     Audit-Id: 129b4e27-1230-4dcc-a02d-b30281d601b5
	I0923 12:42:35.848788  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:35.848793  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:35.848798  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:35.848801  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:35.848911  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:35.849332  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:36.345569  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:36.345594  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:36.345603  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:36.345606  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:36.347940  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:36.347963  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:36.347971  533789 round_trippers.go:580]     Audit-Id: d0b5166a-daa4-49f2-bb89-9a6c93c0eb54
	I0923 12:42:36.347976  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:36.347979  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:36.347983  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:36.347988  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:36.347993  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:36 GMT
	I0923 12:42:36.348189  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:36.845990  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:36.846019  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:36.846030  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:36.846036  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:36.848554  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:36.848584  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:36.848595  533789 round_trippers.go:580]     Audit-Id: 880b9977-2751-4a16-a427-95781a5a9d2a
	I0923 12:42:36.848602  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:36.848609  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:36.848614  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:36.848621  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:36.848626  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:36 GMT
	I0923 12:42:36.848803  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:37.345521  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:37.345551  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:37.345564  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:37.345571  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:37.349083  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:37.349109  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:37.349120  533789 round_trippers.go:580]     Audit-Id: bbd402d6-2fbd-4f55-a9bb-9f7ed53bbe83
	I0923 12:42:37.349125  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:37.349130  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:37.349143  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:37.349147  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:37.349151  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:37 GMT
	I0923 12:42:37.349560  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:37.846374  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:37.846404  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:37.846415  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:37.846421  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:37.849117  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:37.849143  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:37.849152  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:37.849159  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:37 GMT
	I0923 12:42:37.849165  533789 round_trippers.go:580]     Audit-Id: 2f65ae00-7414-4ae2-bf59-50f5a527d982
	I0923 12:42:37.849170  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:37.849177  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:37.849183  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:37.849381  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:37.849737  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:38.346104  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:38.346130  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:38.346139  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:38.346143  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:38.348844  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:38.348868  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:38.348877  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:38 GMT
	I0923 12:42:38.348883  533789 round_trippers.go:580]     Audit-Id: 5663a840-a837-4c5c-8f4f-ffe23669a270
	I0923 12:42:38.348887  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:38.348890  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:38.348894  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:38.348899  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:38.351976  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:38.845652  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:38.845681  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:38.845689  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:38.845693  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:38.849190  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:38.849216  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:38.849227  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:38.849234  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:38.849240  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:38.849245  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:38.849250  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:38 GMT
	I0923 12:42:38.849255  533789 round_trippers.go:580]     Audit-Id: 977ebdba-e0b4-4fb3-9f69-497f2b1fcc91
	I0923 12:42:38.849474  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:39.345821  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:39.345845  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:39.345854  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:39.345858  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:39.348246  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:39.348270  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:39.348281  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:39 GMT
	I0923 12:42:39.348286  533789 round_trippers.go:580]     Audit-Id: 973d49e2-0ecd-411a-a3f6-c89eef280549
	I0923 12:42:39.348291  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:39.348295  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:39.348299  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:39.348305  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:39.348460  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:39.846176  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:39.846206  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:39.846217  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:39.846225  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:39.849200  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:39.849221  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:39.849227  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:39.849231  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:39.849235  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:39.849238  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:39.849241  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:39 GMT
	I0923 12:42:39.849245  533789 round_trippers.go:580]     Audit-Id: e4061dfe-96b6-4179-9b34-79c60f601159
	I0923 12:42:39.849691  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:39.850073  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:40.345373  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:40.345403  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:40.345412  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:40.345415  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:40.348018  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:40.348050  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:40.348062  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:40.348068  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:40.348072  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:40 GMT
	I0923 12:42:40.348075  533789 round_trippers.go:580]     Audit-Id: 353ab718-d363-41f9-90c6-05ff368188b7
	I0923 12:42:40.348079  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:40.348081  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:40.348181  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:40.845931  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:40.845962  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:40.845971  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:40.845976  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:40.848307  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:40.848335  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:40.848345  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:40.848359  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:40 GMT
	I0923 12:42:40.848363  533789 round_trippers.go:580]     Audit-Id: 91d2798d-a6ee-485a-ab77-64c4798833b9
	I0923 12:42:40.848366  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:40.848369  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:40.848372  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:40.848526  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:41.345456  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:41.345483  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:41.345493  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:41.345498  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:41.347813  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:41.347835  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:41.347843  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:41.347847  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:41.347853  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:41.347858  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:41.347861  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:41 GMT
	I0923 12:42:41.347865  533789 round_trippers.go:580]     Audit-Id: 2a15f6d0-c4b5-426b-ba51-f3aaa971bf91
	I0923 12:42:41.348060  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:41.845676  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:41.845709  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:41.845721  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:41.845728  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:41.848158  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:41.848180  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:41.848187  533789 round_trippers.go:580]     Audit-Id: 2e1634af-3370-48f7-b768-508399198075
	I0923 12:42:41.848192  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:41.848194  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:41.848198  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:41.848201  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:41.848203  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:41 GMT
	I0923 12:42:41.848404  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:42.346222  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:42.346254  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:42.346268  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:42.346274  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:42.348767  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:42.348789  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:42.348798  533789 round_trippers.go:580]     Audit-Id: 2870230b-9820-4fea-bd38-eeb978a0174a
	I0923 12:42:42.348806  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:42.348811  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:42.348815  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:42.348819  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:42.348823  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:42 GMT
	I0923 12:42:42.348992  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:42.349341  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:42.845709  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:42.845742  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:42.845752  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:42.845758  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:42.848320  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:42.848348  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:42.848358  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:42 GMT
	I0923 12:42:42.848377  533789 round_trippers.go:580]     Audit-Id: aa44d8b9-4083-4d27-b789-9178a74c702e
	I0923 12:42:42.848385  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:42.848389  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:42.848393  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:42.848397  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:42.848546  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:43.346256  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:43.346282  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:43.346291  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:43.346296  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:43.348747  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:43.348766  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:43.348773  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:43.348777  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:43.348780  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:43 GMT
	I0923 12:42:43.348783  533789 round_trippers.go:580]     Audit-Id: 3a5a6c98-a46b-4381-aa31-42b9eec54e0d
	I0923 12:42:43.348786  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:43.348788  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:43.348961  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:43.845658  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:43.845687  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:43.845696  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:43.845700  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:43.848225  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:43.848258  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:43.848266  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:43.848271  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:43.848277  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:43 GMT
	I0923 12:42:43.848281  533789 round_trippers.go:580]     Audit-Id: 4543d16a-ea4f-4307-b16f-883cca358308
	I0923 12:42:43.848285  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:43.848288  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:43.848768  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:44.345656  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:44.345680  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:44.345693  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:44.345704  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:44.348077  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:44.348100  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:44.348108  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:44 GMT
	I0923 12:42:44.348111  533789 round_trippers.go:580]     Audit-Id: a249a65e-8e60-4145-85b5-19a507067926
	I0923 12:42:44.348114  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:44.348117  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:44.348119  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:44.348122  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:44.348354  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:44.846167  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:44.846194  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:44.846203  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:44.846207  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:44.848818  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:44.848840  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:44.848847  533789 round_trippers.go:580]     Audit-Id: ade54a33-09c8-4d3d-8198-e544268bd0c5
	I0923 12:42:44.848852  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:44.848855  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:44.848858  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:44.848860  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:44.848864  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:44 GMT
	I0923 12:42:44.849005  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:44.849340  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:45.345745  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:45.345771  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:45.345780  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:45.345785  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:45.348404  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:45.348428  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:45.348435  533789 round_trippers.go:580]     Audit-Id: 5c81f3bd-ff1e-48ed-ad20-8993029e9d8f
	I0923 12:42:45.348440  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:45.348442  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:45.348447  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:45.348450  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:45.348453  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:45 GMT
	I0923 12:42:45.348562  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:45.846363  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:45.846391  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:45.846401  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:45.846405  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:45.849149  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:45.849177  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:45.849188  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:45.849193  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:45.849199  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:45 GMT
	I0923 12:42:45.849205  533789 round_trippers.go:580]     Audit-Id: e1e28a4e-06f0-4386-98d6-569480ebea3d
	I0923 12:42:45.849212  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:45.849214  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:45.849327  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:46.346017  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:46.346049  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:46.346060  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:46.346063  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:46.348787  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:46.348820  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:46.348832  533789 round_trippers.go:580]     Audit-Id: 28c35328-0dee-4ded-b643-b76f8fd0a1c6
	I0923 12:42:46.348837  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:46.348840  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:46.348842  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:46.348845  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:46.348848  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:46 GMT
	I0923 12:42:46.348946  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:46.845517  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:46.845546  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:46.845555  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:46.845561  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:46.848142  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:46.848167  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:46.848178  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:46.848185  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:46.848191  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:46 GMT
	I0923 12:42:46.848195  533789 round_trippers.go:580]     Audit-Id: a4d3c825-3feb-4389-b42c-c699ed61fb36
	I0923 12:42:46.848199  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:46.848203  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:46.848457  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:47.346242  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:47.346275  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:47.346293  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:47.346300  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:47.349052  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:47.349073  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:47.349081  533789 round_trippers.go:580]     Audit-Id: 1251048c-3dca-4fa5-8065-ecb4ecdc88bd
	I0923 12:42:47.349086  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:47.349088  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:47.349091  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:47.349095  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:47.349097  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:47 GMT
	I0923 12:42:47.349309  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:47.349793  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:47.846186  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:47.846218  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:47.846228  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:47.846232  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:47.848860  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:47.848883  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:47.848890  533789 round_trippers.go:580]     Audit-Id: 0b2affeb-4962-4868-892e-b53c00db2093
	I0923 12:42:47.848895  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:47.848899  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:47.848904  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:47.848907  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:47.848912  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:47 GMT
	I0923 12:42:47.849148  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:47.849480  533789 node_ready.go:49] node "multinode-915704" has status "Ready":"True"
	I0923 12:42:47.849498  533789 node_ready.go:38] duration metric: took 16.504378231s for node "multinode-915704" to be "Ready" ...
	I0923 12:42:47.849507  533789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:42:47.849568  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:42:47.849577  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:47.849585  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:47.849588  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:47.852992  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:47.853010  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:47.853020  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:47 GMT
	I0923 12:42:47.853029  533789 round_trippers.go:580]     Audit-Id: 63d0725f-5e8d-4b9c-bdba-9733a14e588d
	I0923 12:42:47.853032  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:47.853035  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:47.853037  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:47.853039  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:47.854125  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1281"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89102 chars]
	I0923 12:42:47.858242  533789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:47.858353  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:47.858365  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:47.858376  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:47.858381  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:47.861220  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:47.861245  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:47.861252  533789 round_trippers.go:580]     Audit-Id: dd5fb172-197e-4ac8-8e62-79b13c141167
	I0923 12:42:47.861258  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:47.861263  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:47.861269  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:47.861273  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:47.861277  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:47 GMT
	I0923 12:42:47.861396  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:47.862155  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:47.862180  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:47.862192  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:47.862197  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:47.864451  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:47.864468  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:47.864475  533789 round_trippers.go:580]     Audit-Id: c461204b-de54-49d9-b5da-49d37d79d44c
	I0923 12:42:47.864482  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:47.864485  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:47.864488  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:47.864492  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:47.864495  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:47 GMT
	I0923 12:42:47.864961  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:48.358714  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:48.358744  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:48.358766  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:48.358772  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:48.361398  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:48.361419  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:48.361426  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:48.361429  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:48.361432  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:48 GMT
	I0923 12:42:48.361435  533789 round_trippers.go:580]     Audit-Id: b8e8aa48-a4c0-4eb9-9c89-1f10898574d0
	I0923 12:42:48.361437  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:48.361440  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:48.361740  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:48.362221  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:48.362234  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:48.362241  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:48.362246  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:48.364124  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:48.364140  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:48.364148  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:48.364152  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:48 GMT
	I0923 12:42:48.364156  533789 round_trippers.go:580]     Audit-Id: b1360395-bad1-482d-887e-393422956d9b
	I0923 12:42:48.364160  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:48.364162  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:48.364165  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:48.364328  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:48.858884  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:48.858913  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:48.858937  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:48.858944  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:48.861881  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:48.861906  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:48.861914  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:48.861918  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:48.861921  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:48.861925  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:48 GMT
	I0923 12:42:48.861928  533789 round_trippers.go:580]     Audit-Id: e8abfb05-a660-48a8-838e-22bf30575ab7
	I0923 12:42:48.861931  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:48.862092  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:48.862644  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:48.862659  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:48.862667  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:48.862673  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:48.865540  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:48.865563  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:48.865574  533789 round_trippers.go:580]     Audit-Id: 17d67886-e491-4b30-8143-dbeb1ec0e10d
	I0923 12:42:48.865581  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:48.865587  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:48.865591  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:48.865595  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:48.865599  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:48 GMT
	I0923 12:42:48.865874  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:49.359480  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:49.359507  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:49.359516  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:49.359520  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:49.362475  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:49.362497  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:49.362504  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:49 GMT
	I0923 12:42:49.362507  533789 round_trippers.go:580]     Audit-Id: 5752faca-a972-4738-b597-f68403a3b327
	I0923 12:42:49.362509  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:49.362513  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:49.362515  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:49.362518  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:49.362823  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:49.363315  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:49.363329  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:49.363336  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:49.363340  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:49.365345  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:49.365365  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:49.365372  533789 round_trippers.go:580]     Audit-Id: 2ce213bb-9129-48e1-8966-b1aff6422975
	I0923 12:42:49.365375  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:49.365382  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:49.365387  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:49.365392  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:49.365398  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:49 GMT
	I0923 12:42:49.365666  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:49.859510  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:49.859546  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:49.859558  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:49.859563  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:49.863075  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:49.863126  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:49.863140  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:49 GMT
	I0923 12:42:49.863147  533789 round_trippers.go:580]     Audit-Id: 517b09c4-c448-4d5c-90e0-a05b26a822e4
	I0923 12:42:49.863152  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:49.863157  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:49.863161  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:49.863165  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:49.863286  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:49.863768  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:49.863787  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:49.863794  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:49.863799  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:49.866740  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:49.866797  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:49.866810  533789 round_trippers.go:580]     Audit-Id: a76cd2a9-d702-4c69-a702-d88fe5591c96
	I0923 12:42:49.866815  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:49.866821  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:49.866826  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:49.866832  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:49.866835  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:49 GMT
	I0923 12:42:49.866960  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:49.867450  533789 pod_ready.go:103] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:50.358476  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:50.358509  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:50.358518  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:50.358523  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:50.361453  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:50.361482  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:50.361492  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:50.361497  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:50.361501  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:50.361504  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:50 GMT
	I0923 12:42:50.361510  533789 round_trippers.go:580]     Audit-Id: 5c14503a-adf1-4215-bafc-9c647e623b80
	I0923 12:42:50.361513  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:50.361806  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:50.362336  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:50.362354  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:50.362362  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:50.362367  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:50.364942  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:50.364967  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:50.364975  533789 round_trippers.go:580]     Audit-Id: 0e075bd0-fb3f-4aeb-852e-5dbde425e0c2
	I0923 12:42:50.364978  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:50.364981  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:50.364984  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:50.364988  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:50.364991  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:50 GMT
	I0923 12:42:50.365115  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:50.858674  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:50.858706  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:50.858715  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:50.858719  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:50.861780  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:50.861804  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:50.861811  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:50 GMT
	I0923 12:42:50.861816  533789 round_trippers.go:580]     Audit-Id: 2ee2ff32-2342-4708-b77a-64873d05489a
	I0923 12:42:50.861820  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:50.861825  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:50.861829  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:50.861836  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:50.862003  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:50.862692  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:50.862717  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:50.862729  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:50.862735  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:50.865347  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:50.865424  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:50.865433  533789 round_trippers.go:580]     Audit-Id: 3bdfa622-9826-42a7-bb99-aa0c61e43467
	I0923 12:42:50.865437  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:50.865440  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:50.865443  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:50.865446  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:50.865449  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:50 GMT
	I0923 12:42:50.865569  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:51.359322  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:51.359350  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:51.359360  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:51.359364  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:51.361898  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:51.361925  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:51.361936  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:51.361944  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:51.361983  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:51.362018  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:51 GMT
	I0923 12:42:51.362033  533789 round_trippers.go:580]     Audit-Id: 4be1a5b4-9a7a-4004-ac57-5a3be86994cf
	I0923 12:42:51.362055  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:51.362193  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:51.362849  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:51.362870  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:51.362880  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:51.362883  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:51.364896  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:51.364916  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:51.364926  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:51.364932  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:51.364940  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:51.364945  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:51 GMT
	I0923 12:42:51.364955  533789 round_trippers.go:580]     Audit-Id: 5c47e8dc-63b9-4436-b249-783590731a56
	I0923 12:42:51.364959  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:51.365116  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:51.858746  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:51.858789  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:51.858800  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:51.858803  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:51.861503  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:51.861529  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:51.861537  533789 round_trippers.go:580]     Audit-Id: e46fb3b0-5a9d-4895-959b-6467edaf130f
	I0923 12:42:51.861544  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:51.861547  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:51.861550  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:51.861554  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:51.861557  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:51 GMT
	I0923 12:42:51.861781  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:51.862451  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:51.862474  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:51.862483  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:51.862492  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:51.864930  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:51.864959  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:51.864970  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:51.864976  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:51.864980  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:51 GMT
	I0923 12:42:51.864985  533789 round_trippers.go:580]     Audit-Id: d39f591c-d033-4db7-a4e1-6853fba76aa3
	I0923 12:42:51.864989  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:51.864993  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:51.865169  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:52.358617  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:52.358648  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:52.358657  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:52.358661  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:52.361539  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:52.361569  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:52.361578  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:52 GMT
	I0923 12:42:52.361584  533789 round_trippers.go:580]     Audit-Id: addb16e9-1ac5-4bd7-b22e-3aed5ef9ac1b
	I0923 12:42:52.361588  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:52.361592  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:52.361596  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:52.361599  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:52.361761  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:52.362270  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:52.362289  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:52.362299  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:52.362303  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:52.364209  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:52.364228  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:52.364237  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:52.364244  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:52.364249  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:52.364253  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:52 GMT
	I0923 12:42:52.364258  533789 round_trippers.go:580]     Audit-Id: 3f58f500-acab-4f0e-963b-3a6c3769c878
	I0923 12:42:52.364264  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:52.364402  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:52.364765  533789 pod_ready.go:103] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:52.859217  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:52.859245  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:52.859254  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:52.859256  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:52.862372  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:52.862402  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:52.862410  533789 round_trippers.go:580]     Audit-Id: b4acbeb1-87c0-4786-8a49-989f8bd85643
	I0923 12:42:52.862412  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:52.862416  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:52.862418  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:52.862422  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:52.862425  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:52 GMT
	I0923 12:42:52.862708  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:52.863277  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:52.863295  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:52.863304  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:52.863309  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:52.865778  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:52.865799  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:52.865809  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:52.865814  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:52.865818  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:52 GMT
	I0923 12:42:52.865822  533789 round_trippers.go:580]     Audit-Id: 400e8366-573f-454a-9d9c-660215870999
	I0923 12:42:52.865829  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:52.865836  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:52.865965  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:53.358680  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:53.358709  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:53.358722  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:53.358728  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:53.361387  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:53.361423  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:53.361432  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:53.361437  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:53.361439  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:53.361444  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:53.361446  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:53 GMT
	I0923 12:42:53.361449  533789 round_trippers.go:580]     Audit-Id: 74e566ce-970d-4416-b2c0-9554a22c100c
	I0923 12:42:53.361561  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:53.362173  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:53.362191  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:53.362202  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:53.362210  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:53.364267  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:53.364299  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:53.364309  533789 round_trippers.go:580]     Audit-Id: e8e7517b-1795-4325-8e38-0f7f220edf3a
	I0923 12:42:53.364313  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:53.364319  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:53.364325  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:53.364329  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:53.364335  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:53 GMT
	I0923 12:42:53.364443  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:53.859031  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:53.859069  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:53.859078  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:53.859083  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:53.862170  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:53.862206  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:53.862217  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:53.862224  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:53.862228  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:53.862233  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:53 GMT
	I0923 12:42:53.862237  533789 round_trippers.go:580]     Audit-Id: a0c32f1b-eac3-4879-b92e-a39a9851b5cb
	I0923 12:42:53.862240  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:53.862394  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:53.862970  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:53.862987  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:53.862994  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:53.862997  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:53.865449  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:53.865474  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:53.865484  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:53 GMT
	I0923 12:42:53.865489  533789 round_trippers.go:580]     Audit-Id: e8056dba-2b30-4c00-9e8a-ec331812b0ef
	I0923 12:42:53.865493  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:53.865497  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:53.865502  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:53.865507  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:53.865644  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:54.358463  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:54.358495  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:54.358505  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:54.358510  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:54.361584  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:54.361615  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:54.361624  533789 round_trippers.go:580]     Audit-Id: 022e04c9-3fda-4e24-a063-7704c07a18c9
	I0923 12:42:54.361631  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:54.361636  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:54.361639  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:54.361643  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:54.361647  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:54 GMT
	I0923 12:42:54.361819  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:54.362318  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:54.362331  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:54.362339  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:54.362344  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:54.364578  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:54.364603  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:54.364613  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:54.364618  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:54.364623  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:54.364627  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:54 GMT
	I0923 12:42:54.364633  533789 round_trippers.go:580]     Audit-Id: f526cc17-b96e-4630-a351-40e2840d1ef0
	I0923 12:42:54.364637  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:54.364732  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:54.365049  533789 pod_ready.go:103] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:54.859580  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:54.859611  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:54.859622  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:54.859626  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:54.862686  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:54.862719  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:54.862731  533789 round_trippers.go:580]     Audit-Id: c4973626-f469-42d3-a47f-b1f380d694ae
	I0923 12:42:54.862739  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:54.862744  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:54.862764  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:54.862771  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:54.862782  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:54 GMT
	I0923 12:42:54.862917  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:54.863539  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:54.863566  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:54.863576  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:54.863581  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:54.866075  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:54.866094  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:54.866102  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:54.866105  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:54.866108  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:54.866114  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:54.866120  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:54 GMT
	I0923 12:42:54.866124  533789 round_trippers.go:580]     Audit-Id: 41e2aa3e-7e14-48b7-997c-e7c889d81c96
	I0923 12:42:54.866292  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:55.358997  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:55.359090  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:55.359114  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:55.359123  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:55.361846  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:55.361870  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:55.361879  533789 round_trippers.go:580]     Audit-Id: 69f2f4b4-a81c-4e33-8510-e79eea3dd95f
	I0923 12:42:55.361883  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:55.361889  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:55.361894  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:55.361898  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:55.361901  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:55 GMT
	I0923 12:42:55.362009  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:55.362559  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:55.362583  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:55.362593  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:55.362599  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:55.364674  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:55.364696  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:55.364706  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:55.364710  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:55 GMT
	I0923 12:42:55.364715  533789 round_trippers.go:580]     Audit-Id: 1c4fc584-b6b6-4215-a0b0-fa9407bdd95a
	I0923 12:42:55.364720  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:55.364724  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:55.364729  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:55.364893  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:55.858584  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:55.858617  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:55.858629  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:55.858635  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:55.861600  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:55.861637  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:55.861647  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:55.861653  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:55.861658  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:55 GMT
	I0923 12:42:55.861662  533789 round_trippers.go:580]     Audit-Id: 1cbd8f7a-a889-4da7-bbbc-ac1308b92d0a
	I0923 12:42:55.861666  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:55.861669  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:55.861772  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:55.862269  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:55.862288  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:55.862298  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:55.862304  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:55.865086  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:55.865126  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:55.865137  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:55.865145  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:55.865149  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:55 GMT
	I0923 12:42:55.865154  533789 round_trippers.go:580]     Audit-Id: c515cbe0-8d9a-4453-92a9-699e71adc035
	I0923 12:42:55.865158  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:55.865162  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:55.865775  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:56.358485  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:56.358514  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:56.358523  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:56.358527  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:56.361025  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:56.361048  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:56.361059  533789 round_trippers.go:580]     Audit-Id: 6b99741b-1d23-4ebb-a13e-a4281bf08d1d
	I0923 12:42:56.361064  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:56.361069  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:56.361072  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:56.361078  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:56.361082  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:56 GMT
	I0923 12:42:56.361197  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:56.361909  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:56.361932  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:56.361943  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:56.361953  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:56.364012  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:56.364033  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:56.364042  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:56.364046  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:56 GMT
	I0923 12:42:56.364050  533789 round_trippers.go:580]     Audit-Id: 8d420305-f73a-456b-886e-9a77d0330977
	I0923 12:42:56.364053  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:56.364057  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:56.364060  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:56.364178  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:56.858914  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:56.858960  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:56.858970  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:56.858976  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:56.862645  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:56.862686  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:56.862694  533789 round_trippers.go:580]     Audit-Id: f327fd06-9db5-4b18-ad63-9413adf2d158
	I0923 12:42:56.862698  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:56.862703  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:56.862709  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:56.862714  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:56.862718  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:56 GMT
	I0923 12:42:56.862864  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:56.863635  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:56.863661  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:56.863673  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:56.863678  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:56.866843  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:56.866867  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:56.866874  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:56.866878  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:56.866880  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:56.866883  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:56.866885  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:56 GMT
	I0923 12:42:56.866889  533789 round_trippers.go:580]     Audit-Id: dddc2e2f-6380-4204-a00c-0250b1fa912e
	I0923 12:42:56.867030  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:56.867391  533789 pod_ready.go:103] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:57.358587  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:57.358620  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:57.358633  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:57.358638  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:57.361979  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:57.362004  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:57.362011  533789 round_trippers.go:580]     Audit-Id: 5e1dc853-5556-4205-8029-c6b854ff1c95
	I0923 12:42:57.362017  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:57.362025  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:57.362029  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:57.362035  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:57.362039  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:57 GMT
	I0923 12:42:57.362151  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:57.362656  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:57.362672  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:57.362679  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:57.362682  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:57.365086  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:57.365106  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:57.365112  533789 round_trippers.go:580]     Audit-Id: 17f2bb0c-e1a2-4df0-8489-8b696d95edc4
	I0923 12:42:57.365116  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:57.365119  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:57.365121  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:57.365132  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:57.365135  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:57 GMT
	I0923 12:42:57.365292  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:57.859016  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:57.859060  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:57.859077  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:57.859083  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:57.862178  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:57.862211  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:57.862224  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:57 GMT
	I0923 12:42:57.862231  533789 round_trippers.go:580]     Audit-Id: 73e92740-4197-438f-9eea-e8718ee41904
	I0923 12:42:57.862237  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:57.862241  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:57.862245  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:57.862249  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:57.862470  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:57.863262  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:57.863285  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:57.863297  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:57.863303  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:57.865934  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:57.865978  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:57.865988  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:57.865993  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:57.865999  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:57.866004  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:57.866008  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:57 GMT
	I0923 12:42:57.866012  533789 round_trippers.go:580]     Audit-Id: 9e7996ab-cf08-42fa-ba16-243b61fbca59
	I0923 12:42:57.866252  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:58.358963  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:58.358990  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:58.359000  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:58.359004  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:58.361393  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:58.361415  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:58.361422  533789 round_trippers.go:580]     Audit-Id: 972c9dcc-9096-461a-b43b-453c7f52268e
	I0923 12:42:58.361426  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:58.361429  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:58.361435  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:58.361439  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:58.361443  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:58 GMT
	I0923 12:42:58.361589  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:58.362067  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:58.362081  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:58.362089  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:58.362092  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:58.364133  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:58.364154  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:58.364160  533789 round_trippers.go:580]     Audit-Id: 083923c7-8de0-474a-a235-5bf9ee25e823
	I0923 12:42:58.364165  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:58.364169  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:58.364173  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:58.364177  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:58.364183  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:58 GMT
	I0923 12:42:58.364312  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:58.859330  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:58.859366  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:58.859378  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:58.859385  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:58.862391  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:58.862419  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:58.862426  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:58.862430  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:58.862432  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:58.862438  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:58 GMT
	I0923 12:42:58.862440  533789 round_trippers.go:580]     Audit-Id: a252f1f3-c250-43fe-85d3-612fd6c2aec4
	I0923 12:42:58.862443  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:58.862536  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:58.863073  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:58.863090  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:58.863098  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:58.863102  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:58.865634  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:58.865660  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:58.865668  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:58 GMT
	I0923 12:42:58.865672  533789 round_trippers.go:580]     Audit-Id: b2e12cc4-4fd8-437c-aec8-1b0ee354575d
	I0923 12:42:58.865676  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:58.865679  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:58.865683  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:58.865686  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:58.865792  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:59.359211  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:59.359236  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:59.359247  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:59.359251  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:59.361954  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:59.361985  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:59.361996  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:59.362002  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:59 GMT
	I0923 12:42:59.362007  533789 round_trippers.go:580]     Audit-Id: bbfcfd8d-1716-4d1a-a432-4863be3c448b
	I0923 12:42:59.362011  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:59.362015  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:59.362019  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:59.362136  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:59.362843  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:59.362866  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:59.362876  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:59.362881  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:59.365025  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:59.365052  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:59.365059  533789 round_trippers.go:580]     Audit-Id: ce99cd9e-210c-4384-8f43-b5684bcc90ae
	I0923 12:42:59.365062  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:59.365064  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:59.365067  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:59.365070  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:59.365073  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:59 GMT
	I0923 12:42:59.365237  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:59.365611  533789 pod_ready.go:103] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:59.858962  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:59.859016  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:59.859028  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:59.859034  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:59.862797  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:59.862828  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:59.862836  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:59.862840  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:59.862843  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:59.862846  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:59.862849  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:59 GMT
	I0923 12:42:59.862852  533789 round_trippers.go:580]     Audit-Id: b8bc7add-760d-4b30-8ca8-0d36c8b8d6c6
	I0923 12:42:59.863141  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:59.863962  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:59.863988  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:59.864000  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:59.864006  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:59.866303  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:59.866324  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:59.866332  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:59 GMT
	I0923 12:42:59.866335  533789 round_trippers.go:580]     Audit-Id: b769469c-1b61-438a-bbaf-3036263b4060
	I0923 12:42:59.866338  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:59.866340  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:59.866342  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:59.866345  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:59.866704  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.359516  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:43:00.359548  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.359557  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.359563  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.362918  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:00.362955  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.362967  533789 round_trippers.go:580]     Audit-Id: e465022c-6447-4238-80ba-dd70eccea18a
	I0923 12:43:00.362972  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.362977  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.362981  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.362984  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.362994  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.363166  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:43:00.363753  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:00.363773  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.363781  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.363787  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.365844  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.365864  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.365873  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.365880  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.365883  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.365887  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.365891  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.365895  533789 round_trippers.go:580]     Audit-Id: 889d2b43-c303-417c-9014-3ad2267a8a46
	I0923 12:43:00.366167  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.858860  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:43:00.858896  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.858908  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.858914  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.873328  533789 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 12:43:00.873358  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.873367  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.873370  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.873374  533789 round_trippers.go:580]     Audit-Id: a94ccf8a-fc72-4665-9f8a-ff6df4f4c0c8
	I0923 12:43:00.873377  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.873380  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.873382  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.873623  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1308","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I0923 12:43:00.874328  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:00.874355  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.874366  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.874372  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.877696  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:00.877718  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.877726  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.877730  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.877735  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.877738  533789 round_trippers.go:580]     Audit-Id: 0e0d5eb5-d612-4e98-aa6c-4576a4fb2b5b
	I0923 12:43:00.877742  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.877745  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.878069  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.878399  533789 pod_ready.go:93] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:00.878418  533789 pod_ready.go:82] duration metric: took 13.020140719s for pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.878429  533789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.878492  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-915704
	I0923 12:43:00.878501  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.878509  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.878515  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.881427  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.881448  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.881456  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.881460  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.881462  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.881465  533789 round_trippers.go:580]     Audit-Id: 5bc11e2e-3efb-4d69-94b0-a566431b0793
	I0923 12:43:00.881467  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.881471  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.881889  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-915704","namespace":"kube-system","uid":"298e300f-3a4d-4d3c-803d-d4aa5e369e92","resourceVersion":"1271","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.233:2379","kubernetes.io/config.hash":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.mirror":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.seen":"2024-09-23T12:35:14.769599942Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6686 chars]
	I0923 12:43:00.882356  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:00.882372  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.882379  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.882383  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.884657  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.884675  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.884682  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.884686  533789 round_trippers.go:580]     Audit-Id: f652c027-c4d2-4fa8-b2fc-9ccbef3aff69
	I0923 12:43:00.884689  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.884693  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.884696  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.884700  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.884952  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.885312  533789 pod_ready.go:93] pod "etcd-multinode-915704" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:00.885335  533789 pod_ready.go:82] duration metric: took 6.90041ms for pod "etcd-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.885353  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.885415  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-915704
	I0923 12:43:00.885425  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.885432  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.885436  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.887961  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.887978  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.887985  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.887990  533789 round_trippers.go:580]     Audit-Id: be9a05f8-cbea-42de-b71e-6d4baa7fdd17
	I0923 12:43:00.887994  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.887997  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.888001  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.888004  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.888567  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-915704","namespace":"kube-system","uid":"2c5266db-b2d2-41ac-8bf7-eda1b883d3e3","resourceVersion":"1275","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.233:8443","kubernetes.io/config.hash":"3115e5dacc8088b6f9144058d3597214","kubernetes.io/config.mirror":"3115e5dacc8088b6f9144058d3597214","kubernetes.io/config.seen":"2024-09-23T12:35:14.769595152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7912 chars]
	I0923 12:43:00.889027  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:00.889044  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.889052  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.889056  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.892045  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.892069  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.892078  533789 round_trippers.go:580]     Audit-Id: 55add694-b8a5-4731-9e36-2398ab87935f
	I0923 12:43:00.892085  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.892090  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.892093  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.892097  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.892104  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.892430  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.892747  533789 pod_ready.go:93] pod "kube-apiserver-multinode-915704" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:00.892765  533789 pod_ready.go:82] duration metric: took 7.405884ms for pod "kube-apiserver-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.892775  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.892843  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-915704
	I0923 12:43:00.892852  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.892858  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.892862  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.895225  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.895250  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.895259  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.895265  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.895270  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.895276  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.895280  533789 round_trippers.go:580]     Audit-Id: 124dc789-9ae6-4be3-a42d-5f2086fc8ab1
	I0923 12:43:00.895284  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.895700  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-915704","namespace":"kube-system","uid":"b95455eb-960c-44bf-9c6d-b39459f4c498","resourceVersion":"1269","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02fde30fd2ad3cda5e3cacafb6edf88d","kubernetes.io/config.mirror":"02fde30fd2ad3cda5e3cacafb6edf88d","kubernetes.io/config.seen":"2024-09-23T12:35:14.769598186Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0923 12:43:00.896244  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:00.896261  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.896268  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.896273  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.898509  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.898526  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.898533  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.898537  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.898540  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.898544  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.898547  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.898549  533789 round_trippers.go:580]     Audit-Id: ad60dbc4-2b77-4850-8df6-aa97825a3417
	I0923 12:43:00.898731  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.899139  533789 pod_ready.go:93] pod "kube-controller-manager-multinode-915704" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:00.899160  533789 pod_ready.go:82] duration metric: took 6.379243ms for pod "kube-controller-manager-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.899174  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hgdzz" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.899237  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgdzz
	I0923 12:43:00.899246  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.899253  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.899258  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.901391  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.901408  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.901416  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.901422  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.901428  533789 round_trippers.go:580]     Audit-Id: d920c162-66b0-4c39-a075-7f775388b87f
	I0923 12:43:00.901432  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.901436  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.901441  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.901613  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgdzz","generateName":"kube-proxy-","namespace":"kube-system","uid":"c9ae5011-0233-4713-83c0-5bbc9829abf9","resourceVersion":"991","creationTimestamp":"2024-09-23T12:36:10Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:36:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6207 chars]
	I0923 12:43:00.902132  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m02
	I0923 12:43:00.902149  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.902156  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.902163  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.904984  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.904999  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.905005  533789 round_trippers.go:580]     Audit-Id: 327e94df-3ddd-46d0-b387-f7ebf57e13a1
	I0923 12:43:00.905010  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.905013  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.905016  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.905018  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.905021  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.905353  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704-m02","uid":"aee80d3c-b81a-428e-9a4a-6e531d5a77ec","resourceVersion":"1015","creationTimestamp":"2024-09-23T12:40:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T12_40_23_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:40:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3814 chars]
	I0923 12:43:00.905612  533789 pod_ready.go:93] pod "kube-proxy-hgdzz" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:00.905627  533789 pod_ready.go:82] duration metric: took 6.447485ms for pod "kube-proxy-hgdzz" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.905637  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jthg2" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:01.059017  533789 request.go:632] Waited for 153.306667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jthg2
	I0923 12:43:01.059121  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jthg2
	I0923 12:43:01.059127  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:01.059135  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:01.059147  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:01.062452  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:01.062495  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:01.062506  533789 round_trippers.go:580]     Audit-Id: cfa02e85-8ed1-486f-8cb7-fdb1eed0a0a5
	I0923 12:43:01.062513  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:01.062516  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:01.062520  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:01.062523  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:01.062527  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:01 GMT
	I0923 12:43:01.062656  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jthg2","generateName":"kube-proxy-","namespace":"kube-system","uid":"5bb0bd6f-f3b5-4750-bf9c-dd24b581e10f","resourceVersion":"1090","creationTimestamp":"2024-09-23T12:37:12Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6203 chars]
	I0923 12:43:01.258879  533789 request.go:632] Waited for 195.71768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m03
	I0923 12:43:01.258958  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m03
	I0923 12:43:01.258965  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:01.258975  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:01.258991  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:01.262100  533789 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0923 12:43:01.262137  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:01.262150  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:01.262157  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:01.262165  533789 round_trippers.go:580]     Content-Length: 210
	I0923 12:43:01.262170  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:01 GMT
	I0923 12:43:01.262174  533789 round_trippers.go:580]     Audit-Id: 13d10adf-c887-40d0-bfbb-8ffd80c71fed
	I0923 12:43:01.262181  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:01.262187  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:01.262215  533789 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-915704-m03\" not found","reason":"NotFound","details":{"name":"multinode-915704-m03","kind":"nodes"},"code":404}
	I0923 12:43:01.262359  533789 pod_ready.go:98] node "multinode-915704-m03" hosting pod "kube-proxy-jthg2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-915704-m03": nodes "multinode-915704-m03" not found
	I0923 12:43:01.262380  533789 pod_ready.go:82] duration metric: took 356.736632ms for pod "kube-proxy-jthg2" in "kube-system" namespace to be "Ready" ...
	E0923 12:43:01.262389  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704-m03" hosting pod "kube-proxy-jthg2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-915704-m03": nodes "multinode-915704-m03" not found
	I0923 12:43:01.262396  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rmgjt" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:01.459785  533789 request.go:632] Waited for 197.303091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rmgjt
	I0923 12:43:01.459883  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rmgjt
	I0923 12:43:01.459890  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:01.459902  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:01.459909  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:01.462690  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:01.462721  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:01.462732  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:01.462737  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:01.462742  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:01.462747  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:01 GMT
	I0923 12:43:01.462774  533789 round_trippers.go:580]     Audit-Id: 04254f78-ae7e-4e6f-a12a-3b25d5037f2e
	I0923 12:43:01.462779  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:01.462980  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rmgjt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5d86b98-706f-411f-8209-017ecf7d533f","resourceVersion":"1251","creationTimestamp":"2024-09-23T12:35:19Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0923 12:43:01.658954  533789 request.go:632] Waited for 195.37659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:01.659065  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:01.659074  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:01.659085  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:01.659092  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:01.661815  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:01.661846  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:01.661859  533789 round_trippers.go:580]     Audit-Id: d98ce006-f5cb-48f5-a2d0-94f487bd5498
	I0923 12:43:01.661863  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:01.661867  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:01.661874  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:01.661878  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:01.661883  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:01 GMT
	I0923 12:43:01.662020  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:01.662497  533789 pod_ready.go:93] pod "kube-proxy-rmgjt" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:01.662530  533789 pod_ready.go:82] duration metric: took 400.123073ms for pod "kube-proxy-rmgjt" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:01.662545  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:01.859419  533789 request.go:632] Waited for 196.788931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-915704
	I0923 12:43:01.859506  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-915704
	I0923 12:43:01.859511  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:01.859533  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:01.859539  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:01.862495  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:01.862526  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:01.862536  533789 round_trippers.go:580]     Audit-Id: 0dfd5ad2-184d-43f7-ac45-843e99bf6992
	I0923 12:43:01.862541  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:01.862546  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:01.862550  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:01.862554  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:01.862557  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:01 GMT
	I0923 12:43:01.862681  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-915704","namespace":"kube-system","uid":"6fdd28a4-9d1c-47b1-b14c-212986f47650","resourceVersion":"1260","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f436c981b3942bad9048e7a5ca8911e5","kubernetes.io/config.mirror":"f436c981b3942bad9048e7a5ca8911e5","kubernetes.io/config.seen":"2024-09-23T12:35:14.769599203Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0923 12:43:02.059718  533789 request.go:632] Waited for 196.452098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:02.059800  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:02.059805  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.059813  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.059817  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.062871  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:02.062954  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.062974  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.062980  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.062986  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.062992  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.062996  533789 round_trippers.go:580]     Audit-Id: 1a6d20e8-c9ae-43b1-a39d-251c3c3dff5e
	I0923 12:43:02.063001  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.063141  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:02.063583  533789 pod_ready.go:93] pod "kube-scheduler-multinode-915704" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:02.063606  533789 pod_ready.go:82] duration metric: took 401.047928ms for pod "kube-scheduler-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:02.063622  533789 pod_ready.go:39] duration metric: took 14.214105378s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:43:02.063648  533789 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:43:02.063718  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:43:02.079349  533789 command_runner.go:130] > 1725
	I0923 12:43:02.079423  533789 api_server.go:72] duration metric: took 30.928798484s to wait for apiserver process to appear ...
	I0923 12:43:02.079435  533789 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:43:02.079476  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:43:02.085253  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 200:
	ok
	I0923 12:43:02.085331  533789 round_trippers.go:463] GET https://192.168.39.233:8443/version
	I0923 12:43:02.085340  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.085350  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.085358  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.086325  533789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0923 12:43:02.086345  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.086352  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.086358  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.086361  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.086364  533789 round_trippers.go:580]     Content-Length: 263
	I0923 12:43:02.086367  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.086369  533789 round_trippers.go:580]     Audit-Id: 38a072eb-eeae-4986-bcaa-cec4b4bd504b
	I0923 12:43:02.086371  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.086388  533789 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 12:43:02.086430  533789 api_server.go:141] control plane version: v1.31.1
	I0923 12:43:02.086446  533789 api_server.go:131] duration metric: took 7.005774ms to wait for apiserver health ...
	I0923 12:43:02.086455  533789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:43:02.258840  533789 request.go:632] Waited for 172.302832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:43:02.258932  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:43:02.258938  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.258946  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.258953  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.262930  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:02.262966  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.262979  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.262987  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.262993  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.262999  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.263005  533789 round_trippers.go:580]     Audit-Id: 3c239a8e-3da8-4b94-9606-dce4c9ca8924
	I0923 12:43:02.263011  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.263873  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1312"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1308","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89462 chars]
	I0923 12:43:02.266547  533789 system_pods.go:59] 12 kube-system pods found
	I0923 12:43:02.266580  533789 system_pods.go:61] "coredns-7c65d6cfc9-s5jv2" [0dc645c9-049b-41b4-abb9-efb0c3496da5] Running
	I0923 12:43:02.266586  533789 system_pods.go:61] "etcd-multinode-915704" [298e300f-3a4d-4d3c-803d-d4aa5e369e92] Running
	I0923 12:43:02.266589  533789 system_pods.go:61] "kindnet-cddh6" [f28822f1-bc2c-491a-b022-35c17323bab5] Running
	I0923 12:43:02.266593  533789 system_pods.go:61] "kindnet-kt7cw" [130be908-3588-4c06-8595-64df636abc2b] Running
	I0923 12:43:02.266596  533789 system_pods.go:61] "kindnet-lb8gc" [b3215e24-3c69-4da8-8b5e-db638532efe2] Running
	I0923 12:43:02.266600  533789 system_pods.go:61] "kube-apiserver-multinode-915704" [2c5266db-b2d2-41ac-8bf7-eda1b883d3e3] Running
	I0923 12:43:02.266606  533789 system_pods.go:61] "kube-controller-manager-multinode-915704" [b95455eb-960c-44bf-9c6d-b39459f4c498] Running
	I0923 12:43:02.266609  533789 system_pods.go:61] "kube-proxy-hgdzz" [c9ae5011-0233-4713-83c0-5bbc9829abf9] Running
	I0923 12:43:02.266612  533789 system_pods.go:61] "kube-proxy-jthg2" [5bb0bd6f-f3b5-4750-bf9c-dd24b581e10f] Running
	I0923 12:43:02.266615  533789 system_pods.go:61] "kube-proxy-rmgjt" [d5d86b98-706f-411f-8209-017ecf7d533f] Running
	I0923 12:43:02.266618  533789 system_pods.go:61] "kube-scheduler-multinode-915704" [6fdd28a4-9d1c-47b1-b14c-212986f47650] Running
	I0923 12:43:02.266623  533789 system_pods.go:61] "storage-provisioner" [ec90818c-184f-4066-a5c9-f4875d0b1354] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 12:43:02.266631  533789 system_pods.go:74] duration metric: took 180.169944ms to wait for pod list to return data ...
	I0923 12:43:02.266640  533789 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:43:02.459043  533789 request.go:632] Waited for 192.30567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:43:02.459113  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:43:02.459119  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.459129  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.459166  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.462557  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:02.462586  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.462596  533789 round_trippers.go:580]     Content-Length: 262
	I0923 12:43:02.462602  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.462607  533789 round_trippers.go:580]     Audit-Id: 17fd5512-b873-42f9-93c9-5baef6ed25f6
	I0923 12:43:02.462613  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.462619  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.462622  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.462627  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.462651  533789 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1312"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f10533a2-fd69-47ec-aa30-b82aff79df10","resourceVersion":"296","creationTimestamp":"2024-09-23T12:35:19Z"}}]}
	I0923 12:43:02.462894  533789 default_sa.go:45] found service account: "default"
	I0923 12:43:02.462919  533789 default_sa.go:55] duration metric: took 196.272508ms for default service account to be created ...
	I0923 12:43:02.462928  533789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:43:02.659535  533789 request.go:632] Waited for 196.426938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:43:02.659610  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:43:02.659618  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.659630  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.659635  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.662850  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:02.662888  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.662899  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.662905  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.662910  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.662913  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.662918  533789 round_trippers.go:580]     Audit-Id: da518312-25b8-4285-b3ff-52f806f3db30
	I0923 12:43:02.662922  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.663712  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1312"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1308","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89462 chars]
	I0923 12:43:02.666478  533789 system_pods.go:86] 12 kube-system pods found
	I0923 12:43:02.666509  533789 system_pods.go:89] "coredns-7c65d6cfc9-s5jv2" [0dc645c9-049b-41b4-abb9-efb0c3496da5] Running
	I0923 12:43:02.666516  533789 system_pods.go:89] "etcd-multinode-915704" [298e300f-3a4d-4d3c-803d-d4aa5e369e92] Running
	I0923 12:43:02.666526  533789 system_pods.go:89] "kindnet-cddh6" [f28822f1-bc2c-491a-b022-35c17323bab5] Running
	I0923 12:43:02.666532  533789 system_pods.go:89] "kindnet-kt7cw" [130be908-3588-4c06-8595-64df636abc2b] Running
	I0923 12:43:02.666541  533789 system_pods.go:89] "kindnet-lb8gc" [b3215e24-3c69-4da8-8b5e-db638532efe2] Running
	I0923 12:43:02.666546  533789 system_pods.go:89] "kube-apiserver-multinode-915704" [2c5266db-b2d2-41ac-8bf7-eda1b883d3e3] Running
	I0923 12:43:02.666552  533789 system_pods.go:89] "kube-controller-manager-multinode-915704" [b95455eb-960c-44bf-9c6d-b39459f4c498] Running
	I0923 12:43:02.666561  533789 system_pods.go:89] "kube-proxy-hgdzz" [c9ae5011-0233-4713-83c0-5bbc9829abf9] Running
	I0923 12:43:02.666567  533789 system_pods.go:89] "kube-proxy-jthg2" [5bb0bd6f-f3b5-4750-bf9c-dd24b581e10f] Running
	I0923 12:43:02.666575  533789 system_pods.go:89] "kube-proxy-rmgjt" [d5d86b98-706f-411f-8209-017ecf7d533f] Running
	I0923 12:43:02.666580  533789 system_pods.go:89] "kube-scheduler-multinode-915704" [6fdd28a4-9d1c-47b1-b14c-212986f47650] Running
	I0923 12:43:02.666591  533789 system_pods.go:89] "storage-provisioner" [ec90818c-184f-4066-a5c9-f4875d0b1354] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 12:43:02.666600  533789 system_pods.go:126] duration metric: took 203.665385ms to wait for k8s-apps to be running ...
	I0923 12:43:02.666610  533789 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:43:02.666671  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:43:02.682336  533789 system_svc.go:56] duration metric: took 15.712245ms WaitForService to wait for kubelet
	I0923 12:43:02.682370  533789 kubeadm.go:582] duration metric: took 31.531745772s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:43:02.682390  533789 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:43:02.859862  533789 request.go:632] Waited for 177.370424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes
	I0923 12:43:02.859924  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes
	I0923 12:43:02.859929  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.859936  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.859940  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.863548  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:02.863582  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.863590  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.863595  533789 round_trippers.go:580]     Audit-Id: e9ccaab6-4e60-4c06-a198-10de08dcf1ee
	I0923 12:43:02.863599  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.863603  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.863606  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.863610  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.863759  533789 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1312"},"items":[{"metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10018 chars]
	I0923 12:43:02.864330  533789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:43:02.864363  533789 node_conditions.go:123] node cpu capacity is 2
	I0923 12:43:02.864388  533789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:43:02.864392  533789 node_conditions.go:123] node cpu capacity is 2
	I0923 12:43:02.864395  533789 node_conditions.go:105] duration metric: took 182.000795ms to run NodePressure ...
	I0923 12:43:02.864415  533789 start.go:241] waiting for startup goroutines ...
	I0923 12:43:02.864423  533789 start.go:246] waiting for cluster config update ...
	I0923 12:43:02.864437  533789 start.go:255] writing updated cluster config ...
	I0923 12:43:02.867410  533789 out.go:201] 
	I0923 12:43:02.869706  533789 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:43:02.869811  533789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/config.json ...
	I0923 12:43:02.872485  533789 out.go:177] * Starting "multinode-915704-m02" worker node in "multinode-915704" cluster
	I0923 12:43:02.874551  533789 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:43:02.874601  533789 cache.go:56] Caching tarball of preloaded images
	I0923 12:43:02.874772  533789 preload.go:172] Found /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:43:02.874788  533789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:43:02.874909  533789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/config.json ...
	I0923 12:43:02.875172  533789 start.go:360] acquireMachinesLock for multinode-915704-m02: {Name:mk9742766ed80b377dab18455a5851b42572655c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:43:02.875243  533789 start.go:364] duration metric: took 45.523µs to acquireMachinesLock for "multinode-915704-m02"
	I0923 12:43:02.875266  533789 start.go:96] Skipping create...Using existing machine configuration
	I0923 12:43:02.875273  533789 fix.go:54] fixHost starting: m02
	I0923 12:43:02.875589  533789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:43:02.875637  533789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:43:02.892119  533789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36275
	I0923 12:43:02.892686  533789 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:43:02.893237  533789 main.go:141] libmachine: Using API Version  1
	I0923 12:43:02.893260  533789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:43:02.893611  533789 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:43:02.893801  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:02.893980  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetState
	I0923 12:43:02.895752  533789 fix.go:112] recreateIfNeeded on multinode-915704-m02: state=Stopped err=<nil>
	I0923 12:43:02.895779  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	W0923 12:43:02.895945  533789 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 12:43:02.897805  533789 out.go:177] * Restarting existing kvm2 VM for "multinode-915704-m02" ...
	I0923 12:43:02.899038  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .Start
	I0923 12:43:02.899244  533789 main.go:141] libmachine: (multinode-915704-m02) Ensuring networks are active...
	I0923 12:43:02.899949  533789 main.go:141] libmachine: (multinode-915704-m02) Ensuring network default is active
	I0923 12:43:02.900312  533789 main.go:141] libmachine: (multinode-915704-m02) Ensuring network mk-multinode-915704 is active
	I0923 12:43:02.900730  533789 main.go:141] libmachine: (multinode-915704-m02) Getting domain xml...
	I0923 12:43:02.901474  533789 main.go:141] libmachine: (multinode-915704-m02) Creating domain...
	I0923 12:43:04.178482  533789 main.go:141] libmachine: (multinode-915704-m02) Waiting to get IP...
	I0923 12:43:04.179466  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:04.179908  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:04.180050  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:04.179894  534127 retry.go:31] will retry after 194.461682ms: waiting for machine to come up
	I0923 12:43:04.376567  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:04.377074  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:04.377095  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:04.377041  534127 retry.go:31] will retry after 313.980456ms: waiting for machine to come up
	I0923 12:43:04.692688  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:04.693152  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:04.693181  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:04.693099  534127 retry.go:31] will retry after 372.052091ms: waiting for machine to come up
	I0923 12:43:05.066905  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:05.067467  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:05.067493  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:05.067400  534127 retry.go:31] will retry after 517.898255ms: waiting for machine to come up
	I0923 12:43:05.587278  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:05.587797  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:05.587820  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:05.587744  534127 retry.go:31] will retry after 577.41604ms: waiting for machine to come up
	I0923 12:43:06.166681  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:06.167292  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:06.167323  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:06.167225  534127 retry.go:31] will retry after 585.584403ms: waiting for machine to come up
	I0923 12:43:06.754060  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:06.754483  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:06.754509  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:06.754445  534127 retry.go:31] will retry after 916.565306ms: waiting for machine to come up
	I0923 12:43:07.672599  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:07.673022  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:07.673048  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:07.672961  534127 retry.go:31] will retry after 1.163367164s: waiting for machine to come up
	I0923 12:43:08.837923  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:08.838450  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:08.838481  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:08.838395  534127 retry.go:31] will retry after 1.723378142s: waiting for machine to come up
	I0923 12:43:10.563892  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:10.564385  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:10.564419  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:10.564317  534127 retry.go:31] will retry after 1.435511952s: waiting for machine to come up
	I0923 12:43:12.002007  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:12.002402  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:12.002446  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:12.002335  534127 retry.go:31] will retry after 2.28980358s: waiting for machine to come up
	I0923 12:43:14.294786  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:14.295296  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:14.295318  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:14.295251  534127 retry.go:31] will retry after 3.244708075s: waiting for machine to come up
	I0923 12:43:17.543676  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:17.544065  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:17.544088  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:17.544031  534127 retry.go:31] will retry after 3.435624001s: waiting for machine to come up
	I0923 12:43:20.983033  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:20.983584  533789 main.go:141] libmachine: (multinode-915704-m02) Found IP for machine: 192.168.39.118
	I0923 12:43:20.983613  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has current primary IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:20.983622  533789 main.go:141] libmachine: (multinode-915704-m02) Reserving static IP address...
	I0923 12:43:20.984009  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "multinode-915704-m02", mac: "52:54:00:38:ce:58", ip: "192.168.39.118"} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:20.984043  533789 main.go:141] libmachine: (multinode-915704-m02) Reserved static IP address: 192.168.39.118
	I0923 12:43:20.984063  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | skip adding static IP to network mk-multinode-915704 - found existing host DHCP lease matching {name: "multinode-915704-m02", mac: "52:54:00:38:ce:58", ip: "192.168.39.118"}
	I0923 12:43:20.984079  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | Getting to WaitForSSH function...
	I0923 12:43:20.984095  533789 main.go:141] libmachine: (multinode-915704-m02) Waiting for SSH to be available...
	I0923 12:43:20.986371  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:20.986706  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:20.986745  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:20.986918  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | Using SSH client type: external
	I0923 12:43:20.986945  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa (-rw-------)
	I0923 12:43:20.986968  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:43:20.986976  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | About to run SSH command:
	I0923 12:43:20.986986  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | exit 0
	I0923 12:43:21.110726  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | SSH cmd err, output: <nil>: 
	I0923 12:43:21.111181  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetConfigRaw
	I0923 12:43:21.111857  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetIP
	I0923 12:43:21.114945  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.115357  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.115388  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.115651  533789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/config.json ...
	I0923 12:43:21.115939  533789 machine.go:93] provisionDockerMachine start ...
	I0923 12:43:21.115967  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:21.116201  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.118603  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.119001  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.119042  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.119187  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.119347  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.119532  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.119620  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.119767  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:21.119948  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:21.119962  533789 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:43:21.223056  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:43:21.223100  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetMachineName
	I0923 12:43:21.223405  533789 buildroot.go:166] provisioning hostname "multinode-915704-m02"
	I0923 12:43:21.223435  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetMachineName
	I0923 12:43:21.223622  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.226312  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.226687  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.226716  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.226867  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.227062  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.227255  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.227425  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.227720  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:21.227904  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:21.227917  533789 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-915704-m02 && echo "multinode-915704-m02" | sudo tee /etc/hostname
	I0923 12:43:21.344379  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-915704-m02
	
	I0923 12:43:21.344414  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.347221  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.347590  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.347629  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.347793  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.348006  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.348220  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.348372  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.348628  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:21.348791  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:21.348808  533789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-915704-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-915704-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-915704-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:43:21.459411  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:43:21.459455  533789 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-497735/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-497735/.minikube}
	I0923 12:43:21.459481  533789 buildroot.go:174] setting up certificates
	I0923 12:43:21.459506  533789 provision.go:84] configureAuth start
	I0923 12:43:21.459526  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetMachineName
	I0923 12:43:21.459874  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetIP
	I0923 12:43:21.462864  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.463406  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.463452  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.463587  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.466184  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.466582  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.466614  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.466740  533789 provision.go:143] copyHostCerts
	I0923 12:43:21.466797  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem
	I0923 12:43:21.466864  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem, removing ...
	I0923 12:43:21.466877  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem
	I0923 12:43:21.466955  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem (1078 bytes)
	I0923 12:43:21.467057  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem
	I0923 12:43:21.467083  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem, removing ...
	I0923 12:43:21.467091  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem
	I0923 12:43:21.467132  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem (1123 bytes)
	I0923 12:43:21.467193  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem
	I0923 12:43:21.467218  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem, removing ...
	I0923 12:43:21.467227  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem
	I0923 12:43:21.467264  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem (1679 bytes)
	I0923 12:43:21.467330  533789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem org=jenkins.multinode-915704-m02 san=[127.0.0.1 192.168.39.118 localhost minikube multinode-915704-m02]
	I0923 12:43:21.693555  533789 provision.go:177] copyRemoteCerts
	I0923 12:43:21.693646  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:43:21.693679  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.696546  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.696868  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.696895  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.697060  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.697311  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.697511  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.697665  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa Username:docker}
	I0923 12:43:21.777359  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:43:21.777471  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:43:21.802409  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:43:21.802483  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0923 12:43:21.826698  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:43:21.826801  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:43:21.851165  533789 provision.go:87] duration metric: took 391.640159ms to configureAuth
	I0923 12:43:21.851199  533789 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:43:21.851471  533789 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:43:21.851513  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:21.851834  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.854632  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.855076  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.855102  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.855197  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.855415  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.855570  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.855730  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.855923  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:21.856117  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:21.856130  533789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:43:21.960436  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:43:21.960468  533789 buildroot.go:70] root file system type: tmpfs
	I0923 12:43:21.960625  533789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:43:21.960649  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.963696  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.964127  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.964157  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.964351  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.964568  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.964761  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.964917  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.965077  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:21.965283  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:21.965354  533789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.233"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:43:22.085440  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.233
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:43:22.085480  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:22.088244  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:22.088716  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:22.088747  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:22.089034  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:22.089357  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:22.089559  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:22.089753  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:22.089945  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:22.090112  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:22.090129  533789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:43:23.910473  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 12:43:23.910507  533789 machine.go:96] duration metric: took 2.794550939s to provisionDockerMachine
	I0923 12:43:23.910521  533789 start.go:293] postStartSetup for "multinode-915704-m02" (driver="kvm2")
	I0923 12:43:23.910532  533789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:43:23.910547  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:23.910892  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:43:23.910929  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:23.913814  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:23.914266  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:23.914297  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:23.914475  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:23.914697  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:23.914916  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:23.915168  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa Username:docker}
	I0923 12:43:24.001712  533789 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:43:24.005810  533789 command_runner.go:130] > NAME=Buildroot
	I0923 12:43:24.005836  533789 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 12:43:24.005842  533789 command_runner.go:130] > ID=buildroot
	I0923 12:43:24.005849  533789 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 12:43:24.005856  533789 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 12:43:24.005921  533789 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:43:24.005948  533789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-497735/.minikube/addons for local assets ...
	I0923 12:43:24.006026  533789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-497735/.minikube/files for local assets ...
	I0923 12:43:24.006114  533789 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem -> 5050122.pem in /etc/ssl/certs
	I0923 12:43:24.006127  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem -> /etc/ssl/certs/5050122.pem
	I0923 12:43:24.006237  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:43:24.022068  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem --> /etc/ssl/certs/5050122.pem (1708 bytes)
	I0923 12:43:24.044398  533789 start.go:296] duration metric: took 133.860153ms for postStartSetup
	I0923 12:43:24.044446  533789 fix.go:56] duration metric: took 21.169173966s for fixHost
	I0923 12:43:24.044469  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:24.047631  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.048034  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:24.048063  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.048317  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:24.048593  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:24.048754  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:24.048925  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:24.049156  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:24.049376  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:24.049393  533789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:43:24.151731  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727095404.121993109
	
	I0923 12:43:24.151771  533789 fix.go:216] guest clock: 1727095404.121993109
	I0923 12:43:24.151786  533789 fix.go:229] Guest: 2024-09-23 12:43:24.121993109 +0000 UTC Remote: 2024-09-23 12:43:24.04445047 +0000 UTC m=+89.882899320 (delta=77.542639ms)
	I0923 12:43:24.151806  533789 fix.go:200] guest clock delta is within tolerance: 77.542639ms
	I0923 12:43:24.151813  533789 start.go:83] releasing machines lock for "multinode-915704-m02", held for 21.276556268s
	I0923 12:43:24.151838  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:24.152184  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetIP
	I0923 12:43:24.155205  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.155516  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:24.155541  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.157772  533789 out.go:177] * Found network options:
	I0923 12:43:24.159419  533789 out.go:177]   - NO_PROXY=192.168.39.233
	W0923 12:43:24.160720  533789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:43:24.160761  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:24.161440  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:24.161677  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:24.161792  533789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:43:24.161836  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	W0923 12:43:24.161858  533789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:43:24.161952  533789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:43:24.161973  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:24.164777  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.164803  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.165154  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:24.165185  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.165213  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:24.165228  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.165443  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:24.165609  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:24.165616  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:24.165805  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:24.165822  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:24.165962  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:24.165957  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa Username:docker}
	I0923 12:43:24.166069  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa Username:docker}
	I0923 12:43:24.284029  533789 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0923 12:43:24.284135  533789 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 12:43:24.284189  533789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:43:24.284259  533789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:43:24.300807  533789 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 12:43:24.300890  533789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:43:24.300908  533789 start.go:495] detecting cgroup driver to use...
	I0923 12:43:24.301023  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:43:24.319247  533789 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 12:43:24.319534  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:43:24.329664  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:43:24.340185  533789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:43:24.340265  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:43:24.350666  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:43:24.361156  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:43:24.371483  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:43:24.382115  533789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:43:24.393207  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:43:24.403747  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:43:24.414080  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 12:43:24.424683  533789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:43:24.433981  533789 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:43:24.434036  533789 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:43:24.434085  533789 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:43:24.443633  533789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:43:24.453496  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:43:24.585257  533789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:43:24.609192  533789 start.go:495] detecting cgroup driver to use...
	I0923 12:43:24.609288  533789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:43:24.634294  533789 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 12:43:24.634321  533789 command_runner.go:130] > [Unit]
	I0923 12:43:24.634331  533789 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 12:43:24.634339  533789 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 12:43:24.634348  533789 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 12:43:24.634355  533789 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 12:43:24.634362  533789 command_runner.go:130] > StartLimitBurst=3
	I0923 12:43:24.634368  533789 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 12:43:24.634374  533789 command_runner.go:130] > [Service]
	I0923 12:43:24.634382  533789 command_runner.go:130] > Type=notify
	I0923 12:43:24.634389  533789 command_runner.go:130] > Restart=on-failure
	I0923 12:43:24.634400  533789 command_runner.go:130] > Environment=NO_PROXY=192.168.39.233
	I0923 12:43:24.634414  533789 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 12:43:24.634430  533789 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 12:43:24.634444  533789 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 12:43:24.634456  533789 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 12:43:24.634471  533789 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 12:43:24.634482  533789 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 12:43:24.634496  533789 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 12:43:24.634511  533789 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 12:43:24.634525  533789 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 12:43:24.634533  533789 command_runner.go:130] > ExecStart=
	I0923 12:43:24.634556  533789 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0923 12:43:24.634567  533789 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 12:43:24.634580  533789 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 12:43:24.634594  533789 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 12:43:24.634604  533789 command_runner.go:130] > LimitNOFILE=infinity
	I0923 12:43:24.634613  533789 command_runner.go:130] > LimitNPROC=infinity
	I0923 12:43:24.634620  533789 command_runner.go:130] > LimitCORE=infinity
	I0923 12:43:24.634630  533789 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 12:43:24.634642  533789 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 12:43:24.634652  533789 command_runner.go:130] > TasksMax=infinity
	I0923 12:43:24.634660  533789 command_runner.go:130] > TimeoutStartSec=0
	I0923 12:43:24.634673  533789 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 12:43:24.634681  533789 command_runner.go:130] > Delegate=yes
	I0923 12:43:24.634692  533789 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 12:43:24.634705  533789 command_runner.go:130] > KillMode=process
	I0923 12:43:24.634712  533789 command_runner.go:130] > [Install]
	I0923 12:43:24.634723  533789 command_runner.go:130] > WantedBy=multi-user.target
	I0923 12:43:24.634814  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:43:24.656547  533789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:43:24.678607  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:43:24.693749  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:43:24.707231  533789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:43:24.733572  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:43:24.747783  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:43:24.765745  533789 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 12:43:24.765832  533789 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:43:24.769503  533789 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 12:43:24.769646  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:43:24.778722  533789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:43:24.795361  533789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:43:24.914572  533789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:43:25.036889  533789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:43:25.036955  533789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:43:25.055098  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:43:25.170644  533789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:44:26.234721  533789 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0923 12:44:26.234776  533789 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0923 12:44:26.234801  533789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.064123192s)
	I0923 12:44:26.234894  533789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0923 12:44:26.250347  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 systemd[1]: Starting Docker Application Container Engine...
	I0923 12:44:26.250376  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.466044549Z" level=info msg="Starting up"
	I0923 12:44:26.250388  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.467558463Z" level=info msg="containerd not running, starting managed containerd"
	I0923 12:44:26.250402  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.468352110Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=500
	I0923 12:44:26.250416  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.495664251Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I0923 12:44:26.250438  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.515767190Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0923 12:44:26.250461  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.515914325Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0923 12:44:26.250472  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516007875Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0923 12:44:26.250483  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516050723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250499  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516384302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0923 12:44:26.250510  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516483534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250541  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516683546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0923 12:44:26.250564  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516800268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250578  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516843411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0923 12:44:26.250589  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516884445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250600  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.517142642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250615  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.517424377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250641  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.519741332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0923 12:44:26.250654  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.519863033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250679  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520058313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0923 12:44:26.250698  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520109934Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0923 12:44:26.250716  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520416385Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0923 12:44:26.250731  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520546340Z" level=info msg="metadata content store policy set" policy=shared
	I0923 12:44:26.250746  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.523911761Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0923 12:44:26.250776  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.523997010Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0923 12:44:26.250792  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524014748Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0923 12:44:26.250808  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524032855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0923 12:44:26.250822  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524050629Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0923 12:44:26.250837  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524179075Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0923 12:44:26.250851  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524510950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0923 12:44:26.250867  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524615290Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0923 12:44:26.250883  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524647631Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0923 12:44:26.250918  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524662622Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0923 12:44:26.250940  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524674957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.250957  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524686603Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.250978  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524733937Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.250998  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524749023Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.251017  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524762887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.251034  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524777825Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.251059  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524798426Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.251095  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524814763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.251106  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524842641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251119  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524855948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251131  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524866824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251143  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524877864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251155  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524888510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251167  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524899401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251178  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524909731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251190  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524927140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251202  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524939393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251218  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524952590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251231  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524962502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251243  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524973115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251255  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524983575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251267  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524996839Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0923 12:44:26.251279  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525020872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251291  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525031620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251303  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525043318Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0923 12:44:26.251320  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525116754Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0923 12:44:26.251336  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525139796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0923 12:44:26.251349  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525150902Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0923 12:44:26.251365  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525166046Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0923 12:44:26.251379  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525175859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251391  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525186773Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0923 12:44:26.251403  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525203359Z" level=info msg="NRI interface is disabled by configuration."
	I0923 12:44:26.251414  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526104835Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0923 12:44:26.251424  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526242000Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0923 12:44:26.251433  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526369097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0923 12:44:26.251441  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526899015Z" level=info msg="containerd successfully booted in 0.032473s"
	I0923 12:44:26.251450  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.500430476Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0923 12:44:26.251460  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.525855967Z" level=info msg="Loading containers: start."
	I0923 12:44:26.251481  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.672424233Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0923 12:44:26.251495  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.769348274Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0923 12:44:26.251506  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.829820116Z" level=info msg="Loading containers: done."
	I0923 12:44:26.251521  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.843805067Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0923 12:44:26.251533  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.843946913Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0923 12:44:26.251547  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.844043912Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	I0923 12:44:26.251558  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.844468504Z" level=info msg="Daemon has completed initialization"
	I0923 12:44:26.251572  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.878906159Z" level=info msg="API listen on /var/run/docker.sock"
	I0923 12:44:26.251582  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.879022375Z" level=info msg="API listen on [::]:2376"
	I0923 12:44:26.251592  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 systemd[1]: Started Docker Application Container Engine.
	I0923 12:44:26.251601  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0923 12:44:26.251612  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.155657450Z" level=info msg="Processing signal 'terminated'"
	I0923 12:44:26.251625  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157487813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0923 12:44:26.251641  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157735426Z" level=info msg="Daemon shutdown complete"
	I0923 12:44:26.251672  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157814344Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0923 12:44:26.251686  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157847761Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0923 12:44:26.251695  533789 command_runner.go:130] > Sep 23 12:43:26 multinode-915704-m02 systemd[1]: docker.service: Deactivated successfully.
	I0923 12:44:26.251707  533789 command_runner.go:130] > Sep 23 12:43:26 multinode-915704-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0923 12:44:26.251721  533789 command_runner.go:130] > Sep 23 12:43:26 multinode-915704-m02 systemd[1]: Starting Docker Application Container Engine...
	I0923 12:44:26.251736  533789 command_runner.go:130] > Sep 23 12:43:26 multinode-915704-m02 dockerd[833]: time="2024-09-23T12:43:26.191330905Z" level=info msg="Starting up"
	I0923 12:44:26.251755  533789 command_runner.go:130] > Sep 23 12:44:26 multinode-915704-m02 dockerd[833]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0923 12:44:26.251769  533789 command_runner.go:130] > Sep 23 12:44:26 multinode-915704-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0923 12:44:26.251783  533789 command_runner.go:130] > Sep 23 12:44:26 multinode-915704-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0923 12:44:26.251797  533789 command_runner.go:130] > Sep 23 12:44:26 multinode-915704-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0923 12:44:26.258302  533789 out.go:201] 
	W0923 12:44:26.259744  533789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 23 12:43:22 multinode-915704-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.466044549Z" level=info msg="Starting up"
	Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.467558463Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.468352110Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=500
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.495664251Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.515767190Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.515914325Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516007875Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516050723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516384302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516483534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516683546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516800268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516843411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516884445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.517142642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.517424377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.519741332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.519863033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520058313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520109934Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520416385Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520546340Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.523911761Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.523997010Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524014748Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524032855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524050629Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524179075Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524510950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524615290Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524647631Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524662622Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524674957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524686603Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524733937Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524749023Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524762887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524777825Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524798426Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524814763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524842641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524855948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524866824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524877864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524888510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524899401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524909731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524927140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524939393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524952590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524962502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524973115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524983575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524996839Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525020872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525031620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525043318Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525116754Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525139796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525150902Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525166046Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525175859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525186773Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525203359Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526104835Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526242000Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526369097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526899015Z" level=info msg="containerd successfully booted in 0.032473s"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.500430476Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.525855967Z" level=info msg="Loading containers: start."
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.672424233Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.769348274Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.829820116Z" level=info msg="Loading containers: done."
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.843805067Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.843946913Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.844043912Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.844468504Z" level=info msg="Daemon has completed initialization"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.878906159Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.879022375Z" level=info msg="API listen on [::]:2376"
	Sep 23 12:43:23 multinode-915704-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 23 12:43:25 multinode-915704-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.155657450Z" level=info msg="Processing signal 'terminated'"
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157487813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157735426Z" level=info msg="Daemon shutdown complete"
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157814344Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157847761Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 12:43:26 multinode-915704-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 12:43:26 multinode-915704-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 12:43:26 multinode-915704-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 12:43:26 multinode-915704-m02 dockerd[833]: time="2024-09-23T12:43:26.191330905Z" level=info msg="Starting up"
	Sep 23 12:44:26 multinode-915704-m02 dockerd[833]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 23 12:44:26 multinode-915704-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 23 12:44:26 multinode-915704-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 23 12:44:26 multinode-915704-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.523911761Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.523997010Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524014748Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524032855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524050629Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524179075Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524510950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524615290Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524647631Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524662622Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524674957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524686603Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524733937Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524749023Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524762887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524777825Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524798426Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524814763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524842641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524855948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524866824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524877864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524888510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524899401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524909731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524927140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524939393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524952590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524962502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524973115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524983575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524996839Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525020872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525031620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525043318Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525116754Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525139796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525150902Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525166046Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525175859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525186773Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525203359Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526104835Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526242000Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526369097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526899015Z" level=info msg="containerd successfully booted in 0.032473s"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.500430476Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.525855967Z" level=info msg="Loading containers: start."
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.672424233Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.769348274Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.829820116Z" level=info msg="Loading containers: done."
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.843805067Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.843946913Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.844043912Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.844468504Z" level=info msg="Daemon has completed initialization"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.878906159Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.879022375Z" level=info msg="API listen on [::]:2376"
	Sep 23 12:43:23 multinode-915704-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 23 12:43:25 multinode-915704-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.155657450Z" level=info msg="Processing signal 'terminated'"
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157487813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157735426Z" level=info msg="Daemon shutdown complete"
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157814344Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157847761Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 12:43:26 multinode-915704-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 12:43:26 multinode-915704-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 12:43:26 multinode-915704-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 12:43:26 multinode-915704-m02 dockerd[833]: time="2024-09-23T12:43:26.191330905Z" level=info msg="Starting up"
	Sep 23 12:44:26 multinode-915704-m02 dockerd[833]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 23 12:44:26 multinode-915704-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 23 12:44:26 multinode-915704-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 23 12:44:26 multinode-915704-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0923 12:44:26.259792  533789 out.go:270] * 
	* 
	W0923 12:44:26.260711  533789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 12:44:26.262242  533789 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-915704 --wait=true -v=8 --alsologtostderr --driver=kvm2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-915704 -n multinode-915704
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-915704 logs -n 25: (1.291061307s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-915704 cp multinode-915704-m02:/home/docker/cp-test.txt                       | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704:/home/docker/cp-test_multinode-915704-m02_multinode-915704.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-915704 ssh -n                                                                 | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-915704 ssh -n multinode-915704 sudo cat                                       | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | /home/docker/cp-test_multinode-915704-m02_multinode-915704.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-915704 cp multinode-915704-m02:/home/docker/cp-test.txt                       | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704-m03:/home/docker/cp-test_multinode-915704-m02_multinode-915704-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-915704 ssh -n                                                                 | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-915704 ssh -n multinode-915704-m03 sudo cat                                   | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | /home/docker/cp-test_multinode-915704-m02_multinode-915704-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-915704 cp testdata/cp-test.txt                                                | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-915704 ssh -n                                                                 | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-915704 cp multinode-915704-m03:/home/docker/cp-test.txt                       | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile750648462/001/cp-test_multinode-915704-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-915704 ssh -n                                                                 | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-915704 cp multinode-915704-m03:/home/docker/cp-test.txt                       | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704:/home/docker/cp-test_multinode-915704-m03_multinode-915704.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-915704 ssh -n                                                                 | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-915704 ssh -n multinode-915704 sudo cat                                       | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | /home/docker/cp-test_multinode-915704-m03_multinode-915704.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-915704 cp multinode-915704-m03:/home/docker/cp-test.txt                       | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704-m02:/home/docker/cp-test_multinode-915704-m03_multinode-915704-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-915704 ssh -n                                                                 | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | multinode-915704-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-915704 ssh -n multinode-915704-m02 sudo cat                                   | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	|         | /home/docker/cp-test_multinode-915704-m03_multinode-915704-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-915704 node stop m03                                                          | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:37 UTC |
	| node    | multinode-915704 node start                                                             | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:37 UTC | 23 Sep 24 12:38 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-915704                                                                | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:38 UTC |                     |
	| stop    | -p multinode-915704                                                                     | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:38 UTC |
	| start   | -p multinode-915704                                                                     | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:38 UTC | 23 Sep 24 12:41 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-915704                                                                | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC |                     |
	| node    | multinode-915704 node delete                                                            | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC | 23 Sep 24 12:41 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-915704 stop                                                                   | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC | 23 Sep 24 12:41 UTC |
	| start   | -p multinode-915704                                                                     | multinode-915704 | jenkins | v1.34.0 | 23 Sep 24 12:41 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:41:54
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:41:54.199027  533789 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:41:54.199273  533789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:41:54.199282  533789 out.go:358] Setting ErrFile to fd 2...
	I0923 12:41:54.199286  533789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:41:54.199488  533789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	I0923 12:41:54.200051  533789 out.go:352] Setting JSON to false
	I0923 12:41:54.201083  533789 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8656,"bootTime":1727086658,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:41:54.201204  533789 start.go:139] virtualization: kvm guest
	I0923 12:41:54.203731  533789 out.go:177] * [multinode-915704] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:41:54.205541  533789 notify.go:220] Checking for updates...
	I0923 12:41:54.205595  533789 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:41:54.207248  533789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:41:54.208567  533789 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 12:41:54.209811  533789 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	I0923 12:41:54.211368  533789 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:41:54.212830  533789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:41:54.214514  533789 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:41:54.215062  533789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:41:54.215136  533789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:41:54.230668  533789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40141
	I0923 12:41:54.231212  533789 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:41:54.231859  533789 main.go:141] libmachine: Using API Version  1
	I0923 12:41:54.231881  533789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:41:54.232332  533789 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:41:54.232529  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:41:54.232839  533789 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:41:54.233199  533789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:41:54.233247  533789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:41:54.248755  533789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I0923 12:41:54.249350  533789 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:41:54.249966  533789 main.go:141] libmachine: Using API Version  1
	I0923 12:41:54.249995  533789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:41:54.250378  533789 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:41:54.250588  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:41:54.287454  533789 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 12:41:54.288751  533789 start.go:297] selected driver: kvm2
	I0923 12:41:54.288772  533789 start.go:901] validating driver "kvm2" against &{Name:multinode-915704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-915704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewe
r:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stat
icIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:41:54.288917  533789 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:41:54.289279  533789 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:41:54.289376  533789 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-497735/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 12:41:54.305488  533789 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 12:41:54.306251  533789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:41:54.306289  533789 cni.go:84] Creating CNI manager for ""
	I0923 12:41:54.306342  533789 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0923 12:41:54.306410  533789 start.go:340] cluster config:
	{Name:multinode-915704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-915704 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-d
river-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:41:54.306545  533789 iso.go:125] acquiring lock: {Name:mkc30b88bda541d89938b3c13430927ceb85d23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:41:54.309205  533789 out.go:177] * Starting "multinode-915704" primary control-plane node in "multinode-915704" cluster
	I0923 12:41:54.310716  533789 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:41:54.310780  533789 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 12:41:54.310793  533789 cache.go:56] Caching tarball of preloaded images
	I0923 12:41:54.310893  533789 preload.go:172] Found /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:41:54.310908  533789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:41:54.311032  533789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/config.json ...
	I0923 12:41:54.311255  533789 start.go:360] acquireMachinesLock for multinode-915704: {Name:mk9742766ed80b377dab18455a5851b42572655c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:41:54.311309  533789 start.go:364] duration metric: took 29.682µs to acquireMachinesLock for "multinode-915704"
	I0923 12:41:54.311333  533789 start.go:96] Skipping create...Using existing machine configuration
	I0923 12:41:54.311344  533789 fix.go:54] fixHost starting: 
	I0923 12:41:54.311619  533789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:41:54.311656  533789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:41:54.331078  533789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38489
	I0923 12:41:54.331512  533789 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:41:54.332046  533789 main.go:141] libmachine: Using API Version  1
	I0923 12:41:54.332073  533789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:41:54.332523  533789 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:41:54.332817  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:41:54.332982  533789 main.go:141] libmachine: (multinode-915704) Calling .GetState
	I0923 12:41:54.335099  533789 fix.go:112] recreateIfNeeded on multinode-915704: state=Stopped err=<nil>
	I0923 12:41:54.335127  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	W0923 12:41:54.335293  533789 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 12:41:54.337325  533789 out.go:177] * Restarting existing kvm2 VM for "multinode-915704" ...
	I0923 12:41:54.338687  533789 main.go:141] libmachine: (multinode-915704) Calling .Start
	I0923 12:41:54.338938  533789 main.go:141] libmachine: (multinode-915704) Ensuring networks are active...
	I0923 12:41:54.339898  533789 main.go:141] libmachine: (multinode-915704) Ensuring network default is active
	I0923 12:41:54.340357  533789 main.go:141] libmachine: (multinode-915704) Ensuring network mk-multinode-915704 is active
	I0923 12:41:54.340903  533789 main.go:141] libmachine: (multinode-915704) Getting domain xml...
	I0923 12:41:54.341691  533789 main.go:141] libmachine: (multinode-915704) Creating domain...
	I0923 12:41:55.616048  533789 main.go:141] libmachine: (multinode-915704) Waiting to get IP...
	I0923 12:41:55.616984  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:55.617395  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:55.617511  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:55.617404  533824 retry.go:31] will retry after 204.239914ms: waiting for machine to come up
	I0923 12:41:55.822864  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:55.823348  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:55.823374  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:55.823320  533824 retry.go:31] will retry after 370.145895ms: waiting for machine to come up
	I0923 12:41:56.195174  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:56.195593  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:56.195623  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:56.195561  533824 retry.go:31] will retry after 364.424797ms: waiting for machine to come up
	I0923 12:41:56.562145  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:56.562616  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:56.562644  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:56.562545  533824 retry.go:31] will retry after 573.619472ms: waiting for machine to come up
	I0923 12:41:57.137456  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:57.137942  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:57.137966  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:57.137902  533824 retry.go:31] will retry after 504.492204ms: waiting for machine to come up
	I0923 12:41:57.643695  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:57.644065  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:57.644096  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:57.644017  533824 retry.go:31] will retry after 843.141242ms: waiting for machine to come up
	I0923 12:41:58.488971  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:58.489338  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:58.489365  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:58.489290  533824 retry.go:31] will retry after 987.20219ms: waiting for machine to come up
	I0923 12:41:59.478212  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:41:59.478655  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:41:59.478679  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:41:59.478620  533824 retry.go:31] will retry after 994.73521ms: waiting for machine to come up
	I0923 12:42:00.474739  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:00.475157  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:42:00.475179  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:42:00.475128  533824 retry.go:31] will retry after 1.379660959s: waiting for machine to come up
	I0923 12:42:01.856860  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:01.857506  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:42:01.857536  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:42:01.857446  533824 retry.go:31] will retry after 1.430231424s: waiting for machine to come up
	I0923 12:42:03.290730  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:03.291348  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:42:03.291378  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:42:03.291298  533824 retry.go:31] will retry after 2.739683757s: waiting for machine to come up
	I0923 12:42:06.034026  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:06.034467  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:42:06.034495  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:42:06.034422  533824 retry.go:31] will retry after 3.019160637s: waiting for machine to come up
	I0923 12:42:09.057639  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:09.057984  533789 main.go:141] libmachine: (multinode-915704) DBG | unable to find current IP address of domain multinode-915704 in network mk-multinode-915704
	I0923 12:42:09.058014  533789 main.go:141] libmachine: (multinode-915704) DBG | I0923 12:42:09.057933  533824 retry.go:31] will retry after 4.048216952s: waiting for machine to come up
	I0923 12:42:13.111025  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.111500  533789 main.go:141] libmachine: (multinode-915704) Found IP for machine: 192.168.39.233
	I0923 12:42:13.111522  533789 main.go:141] libmachine: (multinode-915704) Reserving static IP address...
	I0923 12:42:13.111539  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has current primary IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.112051  533789 main.go:141] libmachine: (multinode-915704) Reserved static IP address: 192.168.39.233
	I0923 12:42:13.112081  533789 main.go:141] libmachine: (multinode-915704) Waiting for SSH to be available...
	I0923 12:42:13.112102  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "multinode-915704", mac: "52:54:00:1f:99:2b", ip: "192.168.39.233"} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.112140  533789 main.go:141] libmachine: (multinode-915704) DBG | skip adding static IP to network mk-multinode-915704 - found existing host DHCP lease matching {name: "multinode-915704", mac: "52:54:00:1f:99:2b", ip: "192.168.39.233"}
	I0923 12:42:13.112169  533789 main.go:141] libmachine: (multinode-915704) DBG | Getting to WaitForSSH function...
	I0923 12:42:13.114066  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.114396  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.114423  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.114635  533789 main.go:141] libmachine: (multinode-915704) DBG | Using SSH client type: external
	I0923 12:42:13.114659  533789 main.go:141] libmachine: (multinode-915704) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa (-rw-------)
	I0923 12:42:13.114683  533789 main.go:141] libmachine: (multinode-915704) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:42:13.114694  533789 main.go:141] libmachine: (multinode-915704) DBG | About to run SSH command:
	I0923 12:42:13.114706  533789 main.go:141] libmachine: (multinode-915704) DBG | exit 0
	I0923 12:42:13.243522  533789 main.go:141] libmachine: (multinode-915704) DBG | SSH cmd err, output: <nil>: 
	I0923 12:42:13.244183  533789 main.go:141] libmachine: (multinode-915704) Calling .GetConfigRaw
	I0923 12:42:13.244988  533789 main.go:141] libmachine: (multinode-915704) Calling .GetIP
	I0923 12:42:13.247814  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.248216  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.248243  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.248589  533789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/config.json ...
	I0923 12:42:13.249004  533789 machine.go:93] provisionDockerMachine start ...
	I0923 12:42:13.249056  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:13.249312  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.252129  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.252565  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.252595  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.252717  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:13.252910  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.253091  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.253284  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:13.253422  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:13.253625  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:13.253635  533789 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:42:13.362828  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:42:13.362864  533789 main.go:141] libmachine: (multinode-915704) Calling .GetMachineName
	I0923 12:42:13.363151  533789 buildroot.go:166] provisioning hostname "multinode-915704"
	I0923 12:42:13.363179  533789 main.go:141] libmachine: (multinode-915704) Calling .GetMachineName
	I0923 12:42:13.363371  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.366807  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.367279  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.367305  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.367451  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:13.367646  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.367862  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.368011  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:13.368200  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:13.368376  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:13.368388  533789 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-915704 && echo "multinode-915704" | sudo tee /etc/hostname
	I0923 12:42:13.488528  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-915704
	
	I0923 12:42:13.488570  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.491426  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.491798  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.491825  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.491983  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:13.492172  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.492342  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.492622  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:13.492823  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:13.493035  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:13.493054  533789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-915704' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-915704/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-915704' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:42:13.606717  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:42:13.606746  533789 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-497735/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-497735/.minikube}
	I0923 12:42:13.606788  533789 buildroot.go:174] setting up certificates
	I0923 12:42:13.606799  533789 provision.go:84] configureAuth start
	I0923 12:42:13.606809  533789 main.go:141] libmachine: (multinode-915704) Calling .GetMachineName
	I0923 12:42:13.607174  533789 main.go:141] libmachine: (multinode-915704) Calling .GetIP
	I0923 12:42:13.609974  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.610464  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.610494  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.610677  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.613122  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.613510  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.613538  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.613636  533789 provision.go:143] copyHostCerts
	I0923 12:42:13.613666  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem
	I0923 12:42:13.613698  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem, removing ...
	I0923 12:42:13.613717  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem
	I0923 12:42:13.613793  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem (1078 bytes)
	I0923 12:42:13.613907  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem
	I0923 12:42:13.613930  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem, removing ...
	I0923 12:42:13.613938  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem
	I0923 12:42:13.613968  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem (1123 bytes)
	I0923 12:42:13.614013  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem
	I0923 12:42:13.614030  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem, removing ...
	I0923 12:42:13.614035  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem
	I0923 12:42:13.614065  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem (1679 bytes)
	I0923 12:42:13.614119  533789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem org=jenkins.multinode-915704 san=[127.0.0.1 192.168.39.233 localhost minikube multinode-915704]
	I0923 12:42:13.746597  533789 provision.go:177] copyRemoteCerts
	I0923 12:42:13.746679  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:42:13.746707  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.749582  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.749961  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.749993  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.750271  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:13.750484  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.750660  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:13.750881  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa Username:docker}
	I0923 12:42:13.832378  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:42:13.832461  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:42:13.854985  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:42:13.855064  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0923 12:42:13.879070  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:42:13.879165  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:42:13.901573  533789 provision.go:87] duration metric: took 294.755765ms to configureAuth
	I0923 12:42:13.901609  533789 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:42:13.901891  533789 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:42:13.901921  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:13.902216  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:13.904891  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.905423  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:13.905444  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:13.905780  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:13.906002  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.906175  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:13.906326  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:13.906500  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:13.906711  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:13.906726  533789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:42:14.016539  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:42:14.016567  533789 buildroot.go:70] root file system type: tmpfs
	I0923 12:42:14.016689  533789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:42:14.016707  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:14.019216  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:14.019648  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:14.019673  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:14.019825  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:14.020017  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:14.020150  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:14.020295  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:14.020478  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:14.020667  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:14.020730  533789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:42:14.139367  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:42:14.139398  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:14.142782  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:14.143214  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:14.143240  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:14.143432  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:14.143649  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:14.143815  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:14.143946  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:14.144120  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:14.144291  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:14.144309  533789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:42:16.009085  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0923 12:42:16.009130  533789 machine.go:96] duration metric: took 2.760085923s to provisionDockerMachine
	I0923 12:42:16.009145  533789 start.go:293] postStartSetup for "multinode-915704" (driver="kvm2")
	I0923 12:42:16.009171  533789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:42:16.009203  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:16.009522  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:42:16.009552  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:16.012560  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.012990  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:16.013016  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.013197  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:16.013463  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:16.013662  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:16.013824  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa Username:docker}
	I0923 12:42:16.098463  533789 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:42:16.102363  533789 command_runner.go:130] > NAME=Buildroot
	I0923 12:42:16.102390  533789 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 12:42:16.102397  533789 command_runner.go:130] > ID=buildroot
	I0923 12:42:16.102405  533789 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 12:42:16.102411  533789 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 12:42:16.102492  533789 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:42:16.102517  533789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-497735/.minikube/addons for local assets ...
	I0923 12:42:16.102592  533789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-497735/.minikube/files for local assets ...
	I0923 12:42:16.102687  533789 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem -> 5050122.pem in /etc/ssl/certs
	I0923 12:42:16.102700  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem -> /etc/ssl/certs/5050122.pem
	I0923 12:42:16.102845  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:42:16.113454  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem --> /etc/ssl/certs/5050122.pem (1708 bytes)
	I0923 12:42:16.138217  533789 start.go:296] duration metric: took 129.055498ms for postStartSetup
	I0923 12:42:16.138266  533789 fix.go:56] duration metric: took 21.82692205s for fixHost
	I0923 12:42:16.138288  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:16.140945  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.141324  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:16.141356  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.141503  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:16.141728  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:16.141871  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:16.142048  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:16.142264  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:42:16.142500  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.233 22 <nil> <nil>}
	I0923 12:42:16.142563  533789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:42:16.251580  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727095336.230269678
	
	I0923 12:42:16.251609  533789 fix.go:216] guest clock: 1727095336.230269678
	I0923 12:42:16.251619  533789 fix.go:229] Guest: 2024-09-23 12:42:16.230269678 +0000 UTC Remote: 2024-09-23 12:42:16.138269746 +0000 UTC m=+21.976718596 (delta=91.999932ms)
	I0923 12:42:16.251647  533789 fix.go:200] guest clock delta is within tolerance: 91.999932ms
	I0923 12:42:16.251655  533789 start.go:83] releasing machines lock for "multinode-915704", held for 21.940334209s
	I0923 12:42:16.251699  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:16.251978  533789 main.go:141] libmachine: (multinode-915704) Calling .GetIP
	I0923 12:42:16.254836  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.255280  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:16.255316  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.255454  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:16.256095  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:16.256317  533789 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:42:16.256412  533789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:42:16.256476  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:16.256519  533789 ssh_runner.go:195] Run: cat /version.json
	I0923 12:42:16.256546  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:42:16.259190  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.259449  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.259668  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:16.259702  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.259731  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:16.259746  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:16.259931  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:16.260082  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:42:16.260109  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:16.260233  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:16.260313  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:42:16.260393  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa Username:docker}
	I0923 12:42:16.260439  533789 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:42:16.260536  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa Username:docker}
	I0923 12:42:16.339826  533789 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 12:42:16.381656  533789 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0923 12:42:16.382565  533789 ssh_runner.go:195] Run: systemctl --version
	I0923 12:42:16.388344  533789 command_runner.go:130] > systemd 252 (252)
	I0923 12:42:16.388388  533789 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0923 12:42:16.388453  533789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:42:16.393493  533789 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 12:42:16.393543  533789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:42:16.393605  533789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:42:16.408916  533789 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 12:42:16.409146  533789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:42:16.409168  533789 start.go:495] detecting cgroup driver to use...
	I0923 12:42:16.409326  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:42:16.426571  533789 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 12:42:16.426851  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:42:16.436866  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:42:16.446911  533789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:42:16.446995  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:42:16.457035  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:42:16.467429  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:42:16.477499  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:42:16.487596  533789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:42:16.497930  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:42:16.507816  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:42:16.517902  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 12:42:16.527754  533789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:42:16.536705  533789 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:42:16.536769  533789 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:42:16.536829  533789 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:42:16.546547  533789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:42:16.555715  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:16.665221  533789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:42:16.688142  533789 start.go:495] detecting cgroup driver to use...
	I0923 12:42:16.688239  533789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:42:16.702081  533789 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 12:42:16.702103  533789 command_runner.go:130] > [Unit]
	I0923 12:42:16.702110  533789 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 12:42:16.702115  533789 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 12:42:16.702121  533789 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 12:42:16.702125  533789 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 12:42:16.702131  533789 command_runner.go:130] > StartLimitBurst=3
	I0923 12:42:16.702134  533789 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 12:42:16.702138  533789 command_runner.go:130] > [Service]
	I0923 12:42:16.702142  533789 command_runner.go:130] > Type=notify
	I0923 12:42:16.702147  533789 command_runner.go:130] > Restart=on-failure
	I0923 12:42:16.702159  533789 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 12:42:16.702169  533789 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 12:42:16.702179  533789 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 12:42:16.702188  533789 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 12:42:16.702197  533789 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 12:42:16.702204  533789 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 12:42:16.702230  533789 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 12:42:16.702243  533789 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 12:42:16.702254  533789 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 12:42:16.702260  533789 command_runner.go:130] > ExecStart=
	I0923 12:42:16.702282  533789 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0923 12:42:16.702294  533789 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 12:42:16.702304  533789 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 12:42:16.702317  533789 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 12:42:16.702325  533789 command_runner.go:130] > LimitNOFILE=infinity
	I0923 12:42:16.702341  533789 command_runner.go:130] > LimitNPROC=infinity
	I0923 12:42:16.702349  533789 command_runner.go:130] > LimitCORE=infinity
	I0923 12:42:16.702354  533789 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 12:42:16.702359  533789 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 12:42:16.702363  533789 command_runner.go:130] > TasksMax=infinity
	I0923 12:42:16.702366  533789 command_runner.go:130] > TimeoutStartSec=0
	I0923 12:42:16.702372  533789 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 12:42:16.702375  533789 command_runner.go:130] > Delegate=yes
	I0923 12:42:16.702380  533789 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 12:42:16.702384  533789 command_runner.go:130] > KillMode=process
	I0923 12:42:16.702388  533789 command_runner.go:130] > [Install]
	I0923 12:42:16.702397  533789 command_runner.go:130] > WantedBy=multi-user.target
	I0923 12:42:16.702469  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:42:16.715509  533789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:42:16.732245  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:42:16.744842  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:42:16.757500  533789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:42:16.784826  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:42:16.798210  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:42:16.815435  533789 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 12:42:16.815728  533789 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:42:16.819203  533789 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 12:42:16.819330  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:42:16.828230  533789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:42:16.844054  533789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:42:16.955542  533789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:42:17.077008  533789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:42:17.077189  533789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:42:17.093712  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:17.201145  533789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:42:19.616419  533789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.415229229s)
	I0923 12:42:19.616515  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 12:42:19.630219  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:42:19.643847  533789 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 12:42:19.760184  533789 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 12:42:19.877347  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:19.999483  533789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 12:42:20.015640  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:42:20.028927  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:20.159732  533789 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 12:42:20.232463  533789 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 12:42:20.232537  533789 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 12:42:20.237657  533789 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 12:42:20.237701  533789 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 12:42:20.237712  533789 command_runner.go:130] > Device: 0,22	Inode: 788         Links: 1
	I0923 12:42:20.237722  533789 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0923 12:42:20.237731  533789 command_runner.go:130] > Access: 2024-09-23 12:42:20.152105274 +0000
	I0923 12:42:20.237739  533789 command_runner.go:130] > Modify: 2024-09-23 12:42:20.152105274 +0000
	I0923 12:42:20.237746  533789 command_runner.go:130] > Change: 2024-09-23 12:42:20.155106259 +0000
	I0923 12:42:20.237752  533789 command_runner.go:130] >  Birth: -
	I0923 12:42:20.237980  533789 start.go:563] Will wait 60s for crictl version
	I0923 12:42:20.238067  533789 ssh_runner.go:195] Run: which crictl
	I0923 12:42:20.245328  533789 command_runner.go:130] > /usr/bin/crictl
	I0923 12:42:20.245417  533789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:42:20.280616  533789 command_runner.go:130] > Version:  0.1.0
	I0923 12:42:20.280646  533789 command_runner.go:130] > RuntimeName:  docker
	I0923 12:42:20.280653  533789 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 12:42:20.280661  533789 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 12:42:20.280725  533789 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 12:42:20.280795  533789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:42:20.304475  533789 command_runner.go:130] > 27.3.0
	I0923 12:42:20.304587  533789 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:42:20.323643  533789 command_runner.go:130] > 27.3.0
	I0923 12:42:20.326566  533789 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 12:42:20.326616  533789 main.go:141] libmachine: (multinode-915704) Calling .GetIP
	I0923 12:42:20.329410  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:20.329811  533789 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:42:04 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:42:20.329831  533789 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:42:20.330055  533789 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0923 12:42:20.333948  533789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:42:20.346048  533789 kubeadm.go:883] updating cluster {Name:multinode-915704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-915704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb
:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:42:20.346249  533789 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:42:20.346300  533789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:42:20.362481  533789 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 12:42:20.362520  533789 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 12:42:20.362526  533789 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 12:42:20.362531  533789 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 12:42:20.362536  533789 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0923 12:42:20.362544  533789 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 12:42:20.362558  533789 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 12:42:20.362562  533789 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 12:42:20.362567  533789 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:42:20.362571  533789 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0923 12:42:20.363429  533789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0923 12:42:20.363446  533789 docker.go:615] Images already preloaded, skipping extraction
	I0923 12:42:20.363514  533789 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:42:20.379264  533789 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 12:42:20.379289  533789 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 12:42:20.379296  533789 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 12:42:20.379304  533789 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 12:42:20.379310  533789 command_runner.go:130] > kindest/kindnetd:v20240813-c6f155d6
	I0923 12:42:20.379317  533789 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 12:42:20.379327  533789 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 12:42:20.379334  533789 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 12:42:20.379342  533789 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 12:42:20.379352  533789 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0923 12:42:20.379385  533789 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	kindest/kindnetd:v20240813-c6f155d6
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0923 12:42:20.379404  533789 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:42:20.379417  533789 kubeadm.go:934] updating node { 192.168.39.233 8443 v1.31.1 docker true true} ...
	I0923 12:42:20.379539  533789 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-915704 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-915704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:42:20.379604  533789 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 12:42:20.425812  533789 command_runner.go:130] > cgroupfs
	I0923 12:42:20.427137  533789 cni.go:84] Creating CNI manager for ""
	I0923 12:42:20.427160  533789 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0923 12:42:20.427173  533789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 12:42:20.427204  533789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.233 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-915704 NodeName:multinode-915704 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:42:20.427358  533789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-915704"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 12:42:20.427440  533789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:42:20.437270  533789 command_runner.go:130] > kubeadm
	I0923 12:42:20.437303  533789 command_runner.go:130] > kubectl
	I0923 12:42:20.437310  533789 command_runner.go:130] > kubelet
	I0923 12:42:20.437335  533789 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:42:20.437391  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 12:42:20.446462  533789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0923 12:42:20.462889  533789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:42:20.478146  533789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0923 12:42:20.495258  533789 ssh_runner.go:195] Run: grep 192.168.39.233	control-plane.minikube.internal$ /etc/hosts
	I0923 12:42:20.498869  533789 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:42:20.510578  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:20.624268  533789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:42:20.641714  533789 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704 for IP: 192.168.39.233
	I0923 12:42:20.641737  533789 certs.go:194] generating shared ca certs ...
	I0923 12:42:20.641757  533789 certs.go:226] acquiring lock for ca certs: {Name:mk368fdda7ea812502dc0809d673a3fd993c0e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:42:20.641971  533789 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.key
	I0923 12:42:20.642028  533789 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.key
	I0923 12:42:20.642040  533789 certs.go:256] generating profile certs ...
	I0923 12:42:20.642165  533789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/client.key
	I0923 12:42:20.642251  533789 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/apiserver.key.a42e38d5
	I0923 12:42:20.642300  533789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/proxy-client.key
	I0923 12:42:20.642318  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 12:42:20.642340  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0923 12:42:20.642367  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 12:42:20.642382  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 12:42:20.642396  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 12:42:20.642412  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 12:42:20.642430  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 12:42:20.642454  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 12:42:20.642521  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/505012.pem (1338 bytes)
	W0923 12:42:20.642552  533789 certs.go:480] ignoring /home/jenkins/minikube-integration/19690-497735/.minikube/certs/505012_empty.pem, impossibly tiny 0 bytes
	I0923 12:42:20.642563  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem (1675 bytes)
	I0923 12:42:20.642587  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem (1078 bytes)
	I0923 12:42:20.642618  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem (1123 bytes)
	I0923 12:42:20.642642  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem (1679 bytes)
	I0923 12:42:20.642681  533789 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem (1708 bytes)
	I0923 12:42:20.642718  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem -> /usr/share/ca-certificates/5050122.pem
	I0923 12:42:20.642733  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:42:20.642745  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/505012.pem -> /usr/share/ca-certificates/505012.pem
	I0923 12:42:20.643544  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 12:42:20.670290  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 12:42:20.695310  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 12:42:20.720071  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 12:42:20.746714  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 12:42:20.771697  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 12:42:20.799693  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 12:42:20.829407  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 12:42:20.859249  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem --> /usr/share/ca-certificates/5050122.pem (1708 bytes)
	I0923 12:42:20.884382  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 12:42:20.907020  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/certs/505012.pem --> /usr/share/ca-certificates/505012.pem (1338 bytes)
	I0923 12:42:20.929183  533789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 12:42:20.945058  533789 ssh_runner.go:195] Run: openssl version
	I0923 12:42:20.950601  533789 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0923 12:42:20.950695  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/505012.pem && ln -fs /usr/share/ca-certificates/505012.pem /etc/ssl/certs/505012.pem"
	I0923 12:42:20.961560  533789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/505012.pem
	I0923 12:42:20.965948  533789 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 12:08 /usr/share/ca-certificates/505012.pem
	I0923 12:42:20.965991  533789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 12:08 /usr/share/ca-certificates/505012.pem
	I0923 12:42:20.966059  533789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/505012.pem
	I0923 12:42:20.971429  533789 command_runner.go:130] > 51391683
	I0923 12:42:20.971538  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/505012.pem /etc/ssl/certs/51391683.0"
	I0923 12:42:20.981632  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5050122.pem && ln -fs /usr/share/ca-certificates/5050122.pem /etc/ssl/certs/5050122.pem"
	I0923 12:42:20.991643  533789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5050122.pem
	I0923 12:42:20.995670  533789 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 12:08 /usr/share/ca-certificates/5050122.pem
	I0923 12:42:20.995699  533789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 12:08 /usr/share/ca-certificates/5050122.pem
	I0923 12:42:20.995737  533789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5050122.pem
	I0923 12:42:21.000794  533789 command_runner.go:130] > 3ec20f2e
	I0923 12:42:21.000858  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5050122.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 12:42:21.011805  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 12:42:21.022099  533789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:42:21.026398  533789 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 11:53 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:42:21.026431  533789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 11:53 /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:42:21.026471  533789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 12:42:21.031633  533789 command_runner.go:130] > b5213941
	I0923 12:42:21.031700  533789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 12:42:21.042241  533789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:42:21.046308  533789 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 12:42:21.046329  533789 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 12:42:21.046335  533789 command_runner.go:130] > Device: 253,1	Inode: 529449      Links: 1
	I0923 12:42:21.046341  533789 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 12:42:21.046347  533789 command_runner.go:130] > Access: 2024-09-23 12:39:27.175020451 +0000
	I0923 12:42:21.046352  533789 command_runner.go:130] > Modify: 2024-09-23 12:35:06.416439194 +0000
	I0923 12:42:21.046357  533789 command_runner.go:130] > Change: 2024-09-23 12:35:06.416439194 +0000
	I0923 12:42:21.046361  533789 command_runner.go:130] >  Birth: 2024-09-23 12:35:06.416439194 +0000
	I0923 12:42:21.046418  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 12:42:21.051809  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.051862  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 12:42:21.057169  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.057233  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 12:42:21.062559  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.062620  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 12:42:21.067757  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.067975  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 12:42:21.073165  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.073222  533789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 12:42:21.078221  533789 command_runner.go:130] > Certificate will not expire
	I0923 12:42:21.078295  533789 kubeadm.go:392] StartCluster: {Name:multinode-915704 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-915704 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.118 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:42:21.078442  533789 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 12:42:21.094733  533789 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 12:42:21.104103  533789 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0923 12:42:21.104127  533789 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0923 12:42:21.104133  533789 command_runner.go:130] > /var/lib/minikube/etcd:
	I0923 12:42:21.104136  533789 command_runner.go:130] > member
	I0923 12:42:21.104152  533789 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 12:42:21.104157  533789 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 12:42:21.104199  533789 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 12:42:21.112982  533789 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 12:42:21.113531  533789 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-915704" does not appear in /home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 12:42:21.113680  533789 kubeconfig.go:62] /home/jenkins/minikube-integration/19690-497735/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-915704" cluster setting kubeconfig missing "multinode-915704" context setting]
	I0923 12:42:21.114016  533789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/kubeconfig: {Name:mk0cef7f71c4fa7d96e459b50c6c36de6d1dd40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:42:21.114539  533789 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 12:42:21.114969  533789 kapi.go:59] client config for multinode-915704: &rest.Config{Host:"https://192.168.39.233:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/client.crt", KeyFile:"/home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/client.key", CAFile:"/home/jenkins/minikube-integration/19690-497735/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 12:42:21.115541  533789 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 12:42:21.115868  533789 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 12:42:21.124975  533789 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.233
	I0923 12:42:21.125007  533789 kubeadm.go:1160] stopping kube-system containers ...
	I0923 12:42:21.125072  533789 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 12:42:21.145616  533789 command_runner.go:130] > 879b72b8d259
	I0923 12:42:21.145642  533789 command_runner.go:130] > d8c6fd4c3645
	I0923 12:42:21.145648  533789 command_runner.go:130] > 7fd3389600c2
	I0923 12:42:21.145653  533789 command_runner.go:130] > f514f107aa3b
	I0923 12:42:21.145662  533789 command_runner.go:130] > d38c9e4cde35
	I0923 12:42:21.145667  533789 command_runner.go:130] > e9ab80b3cbfc
	I0923 12:42:21.145672  533789 command_runner.go:130] > de517e94d278
	I0923 12:42:21.145678  533789 command_runner.go:130] > 496b1236003c
	I0923 12:42:21.145685  533789 command_runner.go:130] > 8a5b138b0124
	I0923 12:42:21.145692  533789 command_runner.go:130] > 80c39f229adc
	I0923 12:42:21.145698  533789 command_runner.go:130] > 1b119ee22f96
	I0923 12:42:21.145704  533789 command_runner.go:130] > 2b978cfcf3ae
	I0923 12:42:21.145712  533789 command_runner.go:130] > 3bb7d4eec409
	I0923 12:42:21.145721  533789 command_runner.go:130] > 40e23befdd45
	I0923 12:42:21.145727  533789 command_runner.go:130] > 71016b8c92e5
	I0923 12:42:21.145734  533789 command_runner.go:130] > 3076f80c7c38
	I0923 12:42:21.145740  533789 command_runner.go:130] > 68460215bbe1
	I0923 12:42:21.145749  533789 command_runner.go:130] > d1d519e9923c
	I0923 12:42:21.145757  533789 command_runner.go:130] > 211d988c9d96
	I0923 12:42:21.145767  533789 command_runner.go:130] > ac46f137f49c
	I0923 12:42:21.145773  533789 command_runner.go:130] > 4e46bbfe6817
	I0923 12:42:21.145779  533789 command_runner.go:130] > 7e25ba8cd0a9
	I0923 12:42:21.145788  533789 command_runner.go:130] > 8fd55670f04e
	I0923 12:42:21.145793  533789 command_runner.go:130] > 1cbcfb4b5626
	I0923 12:42:21.145808  533789 command_runner.go:130] > 7525dc942184
	I0923 12:42:21.145817  533789 command_runner.go:130] > 2cdbfd7d1582
	I0923 12:42:21.145823  533789 command_runner.go:130] > 1404684c04c1
	I0923 12:42:21.145829  533789 command_runner.go:130] > e5fb23e4e105
	I0923 12:42:21.145834  533789 command_runner.go:130] > 81597a6b6693
	I0923 12:42:21.145838  533789 command_runner.go:130] > 45b8c1866fc9
	I0923 12:42:21.145844  533789 command_runner.go:130] > 137cd5a0f196
	I0923 12:42:21.145874  533789 docker.go:483] Stopping containers: [879b72b8d259 d8c6fd4c3645 7fd3389600c2 f514f107aa3b d38c9e4cde35 e9ab80b3cbfc de517e94d278 496b1236003c 8a5b138b0124 80c39f229adc 1b119ee22f96 2b978cfcf3ae 3bb7d4eec409 40e23befdd45 71016b8c92e5 3076f80c7c38 68460215bbe1 d1d519e9923c 211d988c9d96 ac46f137f49c 4e46bbfe6817 7e25ba8cd0a9 8fd55670f04e 1cbcfb4b5626 7525dc942184 2cdbfd7d1582 1404684c04c1 e5fb23e4e105 81597a6b6693 45b8c1866fc9 137cd5a0f196]
	I0923 12:42:21.145958  533789 ssh_runner.go:195] Run: docker stop 879b72b8d259 d8c6fd4c3645 7fd3389600c2 f514f107aa3b d38c9e4cde35 e9ab80b3cbfc de517e94d278 496b1236003c 8a5b138b0124 80c39f229adc 1b119ee22f96 2b978cfcf3ae 3bb7d4eec409 40e23befdd45 71016b8c92e5 3076f80c7c38 68460215bbe1 d1d519e9923c 211d988c9d96 ac46f137f49c 4e46bbfe6817 7e25ba8cd0a9 8fd55670f04e 1cbcfb4b5626 7525dc942184 2cdbfd7d1582 1404684c04c1 e5fb23e4e105 81597a6b6693 45b8c1866fc9 137cd5a0f196
	I0923 12:42:21.168377  533789 command_runner.go:130] > 879b72b8d259
	I0923 12:42:21.168407  533789 command_runner.go:130] > d8c6fd4c3645
	I0923 12:42:21.168420  533789 command_runner.go:130] > 7fd3389600c2
	I0923 12:42:21.168425  533789 command_runner.go:130] > f514f107aa3b
	I0923 12:42:21.168430  533789 command_runner.go:130] > d38c9e4cde35
	I0923 12:42:21.168434  533789 command_runner.go:130] > e9ab80b3cbfc
	I0923 12:42:21.168439  533789 command_runner.go:130] > de517e94d278
	I0923 12:42:21.168442  533789 command_runner.go:130] > 496b1236003c
	I0923 12:42:21.168446  533789 command_runner.go:130] > 8a5b138b0124
	I0923 12:42:21.168454  533789 command_runner.go:130] > 80c39f229adc
	I0923 12:42:21.168463  533789 command_runner.go:130] > 1b119ee22f96
	I0923 12:42:21.168471  533789 command_runner.go:130] > 2b978cfcf3ae
	I0923 12:42:21.168478  533789 command_runner.go:130] > 3bb7d4eec409
	I0923 12:42:21.168486  533789 command_runner.go:130] > 40e23befdd45
	I0923 12:42:21.168497  533789 command_runner.go:130] > 71016b8c92e5
	I0923 12:42:21.168504  533789 command_runner.go:130] > 3076f80c7c38
	I0923 12:42:21.168509  533789 command_runner.go:130] > 68460215bbe1
	I0923 12:42:21.168513  533789 command_runner.go:130] > d1d519e9923c
	I0923 12:42:21.168517  533789 command_runner.go:130] > 211d988c9d96
	I0923 12:42:21.168522  533789 command_runner.go:130] > ac46f137f49c
	I0923 12:42:21.168529  533789 command_runner.go:130] > 4e46bbfe6817
	I0923 12:42:21.168533  533789 command_runner.go:130] > 7e25ba8cd0a9
	I0923 12:42:21.168538  533789 command_runner.go:130] > 8fd55670f04e
	I0923 12:42:21.168543  533789 command_runner.go:130] > 1cbcfb4b5626
	I0923 12:42:21.168553  533789 command_runner.go:130] > 7525dc942184
	I0923 12:42:21.168561  533789 command_runner.go:130] > 2cdbfd7d1582
	I0923 12:42:21.168571  533789 command_runner.go:130] > 1404684c04c1
	I0923 12:42:21.168579  533789 command_runner.go:130] > e5fb23e4e105
	I0923 12:42:21.168588  533789 command_runner.go:130] > 81597a6b6693
	I0923 12:42:21.168596  533789 command_runner.go:130] > 45b8c1866fc9
	I0923 12:42:21.168606  533789 command_runner.go:130] > 137cd5a0f196
	I0923 12:42:21.168687  533789 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0923 12:42:21.183915  533789 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 12:42:21.192942  533789 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0923 12:42:21.192975  533789 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0923 12:42:21.192986  533789 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0923 12:42:21.192998  533789 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:42:21.193268  533789 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 12:42:21.193294  533789 kubeadm.go:157] found existing configuration files:
	
	I0923 12:42:21.193341  533789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 12:42:21.201424  533789 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:42:21.201464  533789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 12:42:21.201508  533789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 12:42:21.209737  533789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 12:42:21.217567  533789 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:42:21.217609  533789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 12:42:21.217648  533789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 12:42:21.225920  533789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 12:42:21.233973  533789 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:42:21.234025  533789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 12:42:21.234079  533789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 12:42:21.242621  533789 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 12:42:21.250917  533789 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:42:21.250956  533789 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 12:42:21.250997  533789 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 12:42:21.259635  533789 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 12:42:21.268532  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:21.388694  533789 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 12:42:21.388900  533789 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0923 12:42:21.389024  533789 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0923 12:42:21.389232  533789 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0923 12:42:21.389578  533789 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0923 12:42:21.389782  533789 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0923 12:42:21.390472  533789 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0923 12:42:21.390559  533789 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0923 12:42:21.390725  533789 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0923 12:42:21.390902  533789 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0923 12:42:21.391054  533789 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0923 12:42:21.391399  533789 command_runner.go:130] > [certs] Using the existing "sa" key
	I0923 12:42:21.392746  533789 command_runner.go:130] ! W0923 12:42:21.365889    1375 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:21.392783  533789 command_runner.go:130] ! W0923 12:42:21.366709    1375 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:21.392819  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:21.431061  533789 command_runner.go:130] ! W0923 12:42:21.411249    1380 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:21.431746  533789 command_runner.go:130] ! W0923 12:42:21.412124    1380 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.145915  533789 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 12:42:22.145951  533789 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 12:42:22.145961  533789 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 12:42:22.145970  533789 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 12:42:22.145980  533789 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 12:42:22.145988  533789 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 12:42:22.146028  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:22.185467  533789 command_runner.go:130] ! W0923 12:42:22.165916    1385 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.186157  533789 command_runner.go:130] ! W0923 12:42:22.166715    1385 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.344824  533789 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 12:42:22.344860  533789 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 12:42:22.344868  533789 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0923 12:42:22.344900  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:22.396750  533789 command_runner.go:130] ! W0923 12:42:22.377224    1411 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.397566  533789 command_runner.go:130] ! W0923 12:42:22.378173    1411 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.412204  533789 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 12:42:22.412237  533789 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 12:42:22.412248  533789 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 12:42:22.412263  533789 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 12:42:22.412362  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:22.517883  533789 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 12:42:22.534639  533789 command_runner.go:130] ! W0923 12:42:22.495057    1419 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.534685  533789 command_runner.go:130] ! W0923 12:42:22.495654    1419 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:22.534727  533789 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:42:22.534830  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:42:23.035840  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:42:23.535051  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:42:24.034928  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:42:24.535177  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:42:24.562048  533789 command_runner.go:130] > 1725
	I0923 12:42:24.562121  533789 api_server.go:72] duration metric: took 2.027393055s to wait for apiserver process to appear ...
	I0923 12:42:24.562137  533789 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:42:24.562170  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:24.562781  533789 api_server.go:269] stopped: https://192.168.39.233:8443/healthz: Get "https://192.168.39.233:8443/healthz": dial tcp 192.168.39.233:8443: connect: connection refused
	I0923 12:42:25.062528  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:27.282012  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 12:42:27.282045  533789 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 12:42:27.282060  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:27.364123  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0923 12:42:27.364158  533789 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0923 12:42:27.562477  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:27.572417  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 12:42:27.572444  533789 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 12:42:28.063130  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:28.067571  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0923 12:42:28.067604  533789 api_server.go:103] status: https://192.168.39.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0923 12:42:28.562227  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:42:28.566617  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 200:
	ok
	I0923 12:42:28.566709  533789 round_trippers.go:463] GET https://192.168.39.233:8443/version
	I0923 12:42:28.566721  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:28.566731  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:28.566737  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:28.575825  533789 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 12:42:28.575852  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:28.575863  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:28 GMT
	I0923 12:42:28.575871  533789 round_trippers.go:580]     Audit-Id: f38d41fb-0b76-4626-859f-b3e0af1123c3
	I0923 12:42:28.575876  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:28.575880  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:28.575883  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:28.575887  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:28.575891  533789 round_trippers.go:580]     Content-Length: 263
	I0923 12:42:28.575914  533789 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 12:42:28.576071  533789 api_server.go:141] control plane version: v1.31.1
	I0923 12:42:28.576094  533789 api_server.go:131] duration metric: took 4.013948446s to wait for apiserver health ...
	I0923 12:42:28.576115  533789 cni.go:84] Creating CNI manager for ""
	I0923 12:42:28.576125  533789 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0923 12:42:28.577866  533789 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 12:42:28.579234  533789 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 12:42:28.588805  533789 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0923 12:42:28.588828  533789 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0923 12:42:28.588835  533789 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0923 12:42:28.588841  533789 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 12:42:28.588846  533789 command_runner.go:130] > Access: 2024-09-23 12:42:05.238234495 +0000
	I0923 12:42:28.588852  533789 command_runner.go:130] > Modify: 2024-09-20 04:01:25.000000000 +0000
	I0923 12:42:28.588857  533789 command_runner.go:130] > Change: 2024-09-23 12:42:04.205625858 +0000
	I0923 12:42:28.588860  533789 command_runner.go:130] >  Birth: -
	I0923 12:42:28.589146  533789 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 12:42:28.589165  533789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 12:42:28.623790  533789 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 12:42:28.977419  533789 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0923 12:42:28.998087  533789 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0923 12:42:29.079465  533789 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0923 12:42:29.128902  533789 command_runner.go:130] > daemonset.apps/kindnet configured
	I0923 12:42:29.130730  533789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:42:29.130828  533789 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 12:42:29.130856  533789 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 12:42:29.130939  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:42:29.130948  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.130956  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.130960  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.134526  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:29.134552  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.134563  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.134569  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.134577  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.134581  533789 round_trippers.go:580]     Audit-Id: 7857f266-3c2f-46bc-b251-f72a7a987416
	I0923 12:42:29.134585  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.134589  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.135260  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1151"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90669 chars]
	I0923 12:42:29.140173  533789 system_pods.go:59] 12 kube-system pods found
	I0923 12:42:29.140202  533789 system_pods.go:61] "coredns-7c65d6cfc9-s5jv2" [0dc645c9-049b-41b4-abb9-efb0c3496da5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0923 12:42:29.140210  533789 system_pods.go:61] "etcd-multinode-915704" [298e300f-3a4d-4d3c-803d-d4aa5e369e92] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0923 12:42:29.140216  533789 system_pods.go:61] "kindnet-cddh6" [f28822f1-bc2c-491a-b022-35c17323bab5] Running
	I0923 12:42:29.140224  533789 system_pods.go:61] "kindnet-kt7cw" [130be908-3588-4c06-8595-64df636abc2b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0923 12:42:29.140229  533789 system_pods.go:61] "kindnet-lb8gc" [b3215e24-3c69-4da8-8b5e-db638532efe2] Running
	I0923 12:42:29.140240  533789 system_pods.go:61] "kube-apiserver-multinode-915704" [2c5266db-b2d2-41ac-8bf7-eda1b883d3e3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0923 12:42:29.140255  533789 system_pods.go:61] "kube-controller-manager-multinode-915704" [b95455eb-960c-44bf-9c6d-b39459f4c498] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0923 12:42:29.140262  533789 system_pods.go:61] "kube-proxy-hgdzz" [c9ae5011-0233-4713-83c0-5bbc9829abf9] Running
	I0923 12:42:29.140266  533789 system_pods.go:61] "kube-proxy-jthg2" [5bb0bd6f-f3b5-4750-bf9c-dd24b581e10f] Running
	I0923 12:42:29.140271  533789 system_pods.go:61] "kube-proxy-rmgjt" [d5d86b98-706f-411f-8209-017ecf7d533f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0923 12:42:29.140280  533789 system_pods.go:61] "kube-scheduler-multinode-915704" [6fdd28a4-9d1c-47b1-b14c-212986f47650] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0923 12:42:29.140288  533789 system_pods.go:61] "storage-provisioner" [ec90818c-184f-4066-a5c9-f4875d0b1354] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 12:42:29.140294  533789 system_pods.go:74] duration metric: took 9.545933ms to wait for pod list to return data ...
	I0923 12:42:29.140304  533789 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:42:29.140365  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes
	I0923 12:42:29.140374  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.140381  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.140384  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.143494  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:29.143518  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.143528  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.143534  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.143537  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.143541  533789 round_trippers.go:580]     Audit-Id: 1fdd96ce-c597-45be-9f42-3ea774de53ce
	I0923 12:42:29.143546  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.143549  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.143727  533789 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1151"},"items":[{"metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10145 chars]
	I0923 12:42:29.144443  533789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:42:29.144467  533789 node_conditions.go:123] node cpu capacity is 2
	I0923 12:42:29.144482  533789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:42:29.144492  533789 node_conditions.go:123] node cpu capacity is 2
	I0923 12:42:29.144498  533789 node_conditions.go:105] duration metric: took 4.189809ms to run NodePressure ...
	I0923 12:42:29.144521  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0923 12:42:29.187820  533789 command_runner.go:130] ! W0923 12:42:29.169883    2180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:29.188540  533789 command_runner.go:130] ! W0923 12:42:29.170720    2180 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 12:42:29.501847  533789 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0923 12:42:29.501886  533789 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0923 12:42:29.501921  533789 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0923 12:42:29.502053  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0923 12:42:29.502071  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.502082  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.502087  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.508395  533789 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 12:42:29.508417  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.508424  533789 round_trippers.go:580]     Audit-Id: edb8cc42-b9b2-4ed2-a234-dd66172bc585
	I0923 12:42:29.508429  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.508433  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.508436  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.508438  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.508441  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.508801  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1156"},"items":[{"metadata":{"name":"etcd-multinode-915704","namespace":"kube-system","uid":"298e300f-3a4d-4d3c-803d-d4aa5e369e92","resourceVersion":"1143","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.233:2379","kubernetes.io/config.hash":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.mirror":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.seen":"2024-09-23T12:35:14.769599942Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31285 chars]
	I0923 12:42:29.509913  533789 kubeadm.go:739] kubelet initialised
	I0923 12:42:29.509936  533789 kubeadm.go:740] duration metric: took 8.003121ms waiting for restarted kubelet to initialise ...
	I0923 12:42:29.509947  533789 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:42:29.510028  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:42:29.510039  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.510050  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.510057  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.513734  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:29.513758  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.513769  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.513774  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.513785  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.513791  533789 round_trippers.go:580]     Audit-Id: abf033d5-0214-4fbf-ae69-23f900e14896
	I0923 12:42:29.513795  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.513799  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.515330  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1156"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90076 chars]
	I0923 12:42:29.518838  533789 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.518952  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:29.518963  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.518973  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.518979  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.521631  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:29.521645  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.521652  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.521656  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.521660  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.521663  533789 round_trippers.go:580]     Audit-Id: 6028a39d-10a3-46a8-a268-79ddb5e78a08
	I0923 12:42:29.521666  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.521669  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.522189  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:29.522643  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:29.522658  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.522665  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.522671  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.524603  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:29.524620  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.524629  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.524634  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.524639  533789 round_trippers.go:580]     Audit-Id: 7cfbca6d-829d-4637-9611-c4f81fbdd596
	I0923 12:42:29.524643  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.524646  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.524649  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.524813  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I0923 12:42:29.525206  533789 pod_ready.go:98] node "multinode-915704" hosting pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.525228  533789 pod_ready.go:82] duration metric: took 6.364909ms for pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:29.525237  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.525246  533789 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.525300  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-915704
	I0923 12:42:29.525308  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.525315  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.525318  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.527610  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:29.527623  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.527631  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.527636  533789 round_trippers.go:580]     Audit-Id: f34c6907-7136-476b-adcc-9dd51f0fe40c
	I0923 12:42:29.527643  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.527647  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.527652  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.527657  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.528004  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-915704","namespace":"kube-system","uid":"298e300f-3a4d-4d3c-803d-d4aa5e369e92","resourceVersion":"1143","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.233:2379","kubernetes.io/config.hash":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.mirror":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.seen":"2024-09-23T12:35:14.769599942Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6910 chars]
	I0923 12:42:29.528376  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:29.528389  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.528396  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.528399  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.530284  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:29.530309  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.530317  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.530322  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.530325  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.530330  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.530334  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.530337  533789 round_trippers.go:580]     Audit-Id: 6be7f4ef-e443-4e48-97fc-0258f0b4abc0
	I0923 12:42:29.530466  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I0923 12:42:29.530869  533789 pod_ready.go:98] node "multinode-915704" hosting pod "etcd-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.530894  533789 pod_ready.go:82] duration metric: took 5.640622ms for pod "etcd-multinode-915704" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:29.530908  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "etcd-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.530930  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.530990  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-915704
	I0923 12:42:29.530998  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.531004  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.531008  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.532910  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:29.532922  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.532930  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.532935  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.532940  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.532944  533789 round_trippers.go:580]     Audit-Id: 517e19be-8ed0-450a-8d33-f09a5895cd7b
	I0923 12:42:29.532948  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.532953  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.533117  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-915704","namespace":"kube-system","uid":"2c5266db-b2d2-41ac-8bf7-eda1b883d3e3","resourceVersion":"1141","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.233:8443","kubernetes.io/config.hash":"3115e5dacc8088b6f9144058d3597214","kubernetes.io/config.mirror":"3115e5dacc8088b6f9144058d3597214","kubernetes.io/config.seen":"2024-09-23T12:35:14.769595152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8156 chars]
	I0923 12:42:29.533623  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:29.533642  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.533652  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.533659  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.535470  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:29.535483  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.535491  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.535496  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.535501  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.535504  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.535507  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.535512  533789 round_trippers.go:580]     Audit-Id: bc3c8040-96d8-4206-b5bb-b44df2247faf
	I0923 12:42:29.535636  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I0923 12:42:29.535977  533789 pod_ready.go:98] node "multinode-915704" hosting pod "kube-apiserver-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.536002  533789 pod_ready.go:82] duration metric: took 5.060553ms for pod "kube-apiserver-multinode-915704" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:29.536017  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "kube-apiserver-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.536026  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.536125  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-915704
	I0923 12:42:29.536140  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.536150  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.536161  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.538067  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:29.538081  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.538088  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.538093  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.538099  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.538103  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.538108  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.538131  533789 round_trippers.go:580]     Audit-Id: 6286f72d-605c-4c1f-bbff-417dd4ab934a
	I0923 12:42:29.538241  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-915704","namespace":"kube-system","uid":"b95455eb-960c-44bf-9c6d-b39459f4c498","resourceVersion":"1142","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02fde30fd2ad3cda5e3cacafb6edf88d","kubernetes.io/config.mirror":"02fde30fd2ad3cda5e3cacafb6edf88d","kubernetes.io/config.seen":"2024-09-23T12:35:14.769598186Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0923 12:42:29.538724  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:29.538774  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.538785  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.538801  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.540840  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:29.540854  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.540860  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.540865  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.540867  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.540870  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.540873  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.540877  533789 round_trippers.go:580]     Audit-Id: 367f5179-d2ec-4acd-84c0-cbdb1608f458
	I0923 12:42:29.541020  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I0923 12:42:29.541322  533789 pod_ready.go:98] node "multinode-915704" hosting pod "kube-controller-manager-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.541343  533789 pod_ready.go:82] duration metric: took 5.305107ms for pod "kube-controller-manager-multinode-915704" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:29.541355  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "kube-controller-manager-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:29.541365  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-hgdzz" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.731843  533789 request.go:632] Waited for 190.380157ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgdzz
	I0923 12:42:29.731917  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgdzz
	I0923 12:42:29.731925  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.731934  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.731940  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.734735  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:29.734775  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.734785  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.734792  533789 round_trippers.go:580]     Audit-Id: b670af2b-982f-4ca8-853b-928c422e59a5
	I0923 12:42:29.734795  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.734801  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.734805  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.734809  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.735012  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgdzz","generateName":"kube-proxy-","namespace":"kube-system","uid":"c9ae5011-0233-4713-83c0-5bbc9829abf9","resourceVersion":"991","creationTimestamp":"2024-09-23T12:36:10Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:36:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6207 chars]
	I0923 12:42:29.931876  533789 request.go:632] Waited for 196.408791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m02
	I0923 12:42:29.931987  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m02
	I0923 12:42:29.931995  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:29.932007  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:29.932016  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:29.934531  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:29.934563  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:29.934573  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:29.934581  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:29.934588  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:29.934592  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:29 GMT
	I0923 12:42:29.934597  533789 round_trippers.go:580]     Audit-Id: 927cf82d-fdec-4696-8b85-cf4a9db2589d
	I0923 12:42:29.934601  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:29.934797  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704-m02","uid":"aee80d3c-b81a-428e-9a4a-6e531d5a77ec","resourceVersion":"1015","creationTimestamp":"2024-09-23T12:40:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T12_40_23_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:40:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3814 chars]
	I0923 12:42:29.935137  533789 pod_ready.go:93] pod "kube-proxy-hgdzz" in "kube-system" namespace has status "Ready":"True"
	I0923 12:42:29.935160  533789 pod_ready.go:82] duration metric: took 393.782321ms for pod "kube-proxy-hgdzz" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:29.935174  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jthg2" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:30.131376  533789 request.go:632] Waited for 196.095228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jthg2
	I0923 12:42:30.131455  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jthg2
	I0923 12:42:30.131464  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:30.131475  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:30.131485  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:30.134642  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:30.134667  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:30.134676  533789 round_trippers.go:580]     Audit-Id: 020885f6-0e70-405a-9bbb-735061f7cd86
	I0923 12:42:30.134682  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:30.134686  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:30.134689  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:30.134695  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:30.134700  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:30 GMT
	I0923 12:42:30.135172  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jthg2","generateName":"kube-proxy-","namespace":"kube-system","uid":"5bb0bd6f-f3b5-4750-bf9c-dd24b581e10f","resourceVersion":"1090","creationTimestamp":"2024-09-23T12:37:12Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6203 chars]
	I0923 12:42:30.331096  533789 request.go:632] Waited for 195.293446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m03
	I0923 12:42:30.331175  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m03
	I0923 12:42:30.331181  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:30.331190  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:30.331195  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:30.333523  533789 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0923 12:42:30.333549  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:30.333556  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:30.333561  533789 round_trippers.go:580]     Content-Length: 210
	I0923 12:42:30.333564  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:30 GMT
	I0923 12:42:30.333567  533789 round_trippers.go:580]     Audit-Id: 300597c5-eada-459b-805b-c35430712c2a
	I0923 12:42:30.333571  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:30.333574  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:30.333578  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:30.333605  533789 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-915704-m03\" not found","reason":"NotFound","details":{"name":"multinode-915704-m03","kind":"nodes"},"code":404}
	I0923 12:42:30.333816  533789 pod_ready.go:98] node "multinode-915704-m03" hosting pod "kube-proxy-jthg2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-915704-m03": nodes "multinode-915704-m03" not found
	I0923 12:42:30.333832  533789 pod_ready.go:82] duration metric: took 398.650792ms for pod "kube-proxy-jthg2" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:30.333841  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704-m03" hosting pod "kube-proxy-jthg2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-915704-m03": nodes "multinode-915704-m03" not found
	I0923 12:42:30.333848  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rmgjt" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:30.530978  533789 request.go:632] Waited for 197.040271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rmgjt
	I0923 12:42:30.531100  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rmgjt
	I0923 12:42:30.531109  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:30.531121  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:30.531125  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:30.533973  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:30.533993  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:30.534000  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:30.534004  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:30 GMT
	I0923 12:42:30.534008  533789 round_trippers.go:580]     Audit-Id: 62f677e2-9ccb-486e-a8e6-0fe4db71017c
	I0923 12:42:30.534011  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:30.534014  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:30.534018  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:30.534387  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rmgjt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5d86b98-706f-411f-8209-017ecf7d533f","resourceVersion":"1152","creationTimestamp":"2024-09-23T12:35:19Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0923 12:42:30.731198  533789 request.go:632] Waited for 196.327014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:30.731316  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:30.731324  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:30.731337  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:30.731343  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:30.734011  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:30.734040  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:30.734061  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:30.734068  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:30 GMT
	I0923 12:42:30.734073  533789 round_trippers.go:580]     Audit-Id: 9cf1d51c-dfd7-493e-b12d-d62198008d42
	I0923 12:42:30.734078  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:30.734082  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:30.734087  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:30.734207  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1138","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5285 chars]
	I0923 12:42:30.734626  533789 pod_ready.go:98] node "multinode-915704" hosting pod "kube-proxy-rmgjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:30.734654  533789 pod_ready.go:82] duration metric: took 400.794498ms for pod "kube-proxy-rmgjt" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:30.734666  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "kube-proxy-rmgjt" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:30.734677  533789 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:30.931560  533789 request.go:632] Waited for 196.800785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-915704
	I0923 12:42:30.931663  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-915704
	I0923 12:42:30.931672  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:30.931685  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:30.931694  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:30.951274  533789 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0923 12:42:30.951311  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:30.951320  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:30.951323  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:30.951326  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:30.951330  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:30.951333  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:30 GMT
	I0923 12:42:30.951336  533789 round_trippers.go:580]     Audit-Id: 58b43bc2-e600-40a7-9e27-c48bf1945827
	I0923 12:42:30.953204  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-915704","namespace":"kube-system","uid":"6fdd28a4-9d1c-47b1-b14c-212986f47650","resourceVersion":"1146","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f436c981b3942bad9048e7a5ca8911e5","kubernetes.io/config.mirror":"f436c981b3942bad9048e7a5ca8911e5","kubernetes.io/config.seen":"2024-09-23T12:35:14.769599203Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0923 12:42:31.131685  533789 request.go:632] Waited for 177.839386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:31.131755  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:31.131762  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:31.131774  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:31.131784  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:31.136470  533789 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 12:42:31.136494  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:31.136501  533789 round_trippers.go:580]     Audit-Id: 223a0b1c-d047-4038-bdcd-84f0812fce5d
	I0923 12:42:31.136506  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:31.136509  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:31.136512  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:31.136514  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:31.136517  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:31 GMT
	I0923 12:42:31.137033  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:31.137382  533789 pod_ready.go:98] node "multinode-915704" hosting pod "kube-scheduler-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:31.137406  533789 pod_ready.go:82] duration metric: took 402.720727ms for pod "kube-scheduler-multinode-915704" in "kube-system" namespace to be "Ready" ...
	E0923 12:42:31.137419  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704" hosting pod "kube-scheduler-multinode-915704" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-915704" has status "Ready":"False"
	I0923 12:42:31.137432  533789 pod_ready.go:39] duration metric: took 1.627474764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:42:31.137457  533789 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 12:42:31.149418  533789 command_runner.go:130] > -16
	I0923 12:42:31.149455  533789 ops.go:34] apiserver oom_adj: -16
	I0923 12:42:31.149463  533789 kubeadm.go:597] duration metric: took 10.045299949s to restartPrimaryControlPlane
	I0923 12:42:31.149473  533789 kubeadm.go:394] duration metric: took 10.071191225s to StartCluster
	I0923 12:42:31.149499  533789 settings.go:142] acquiring lock: {Name:mke8a2c3e1b68f8bfc3d2a76cd3ad640f66f3e7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:42:31.149584  533789 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 12:42:31.150356  533789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/kubeconfig: {Name:mk0cef7f71c4fa7d96e459b50c6c36de6d1dd40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:42:31.150592  533789 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.233 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:42:31.150710  533789 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 12:42:31.150851  533789 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:42:31.153013  533789 out.go:177] * Enabled addons: 
	I0923 12:42:31.153017  533789 out.go:177] * Verifying Kubernetes components...
	I0923 12:42:31.154079  533789 addons.go:510] duration metric: took 3.376252ms for enable addons: enabled=[]
	I0923 12:42:31.154161  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:42:31.329069  533789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:42:31.345078  533789 node_ready.go:35] waiting up to 6m0s for node "multinode-915704" to be "Ready" ...
	I0923 12:42:31.345228  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:31.345242  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:31.345252  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:31.345261  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:31.347704  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:31.347728  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:31.347738  533789 round_trippers.go:580]     Audit-Id: 61422360-1a2c-4cab-8779-e131b5dbcd38
	I0923 12:42:31.347744  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:31.347750  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:31.347756  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:31.347765  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:31.347770  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:31 GMT
	I0923 12:42:31.348180  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:31.845956  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:31.845983  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:31.845992  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:31.845997  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:31.848549  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:31.848578  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:31.848589  533789 round_trippers.go:580]     Audit-Id: afd8cdd9-02d3-440c-9a71-bce4976fb9ed
	I0923 12:42:31.848594  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:31.848599  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:31.848605  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:31.848609  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:31.848613  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:31 GMT
	I0923 12:42:31.848801  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:32.345451  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:32.345485  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:32.345495  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:32.345499  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:32.348058  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:32.348091  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:32.348102  533789 round_trippers.go:580]     Audit-Id: d7101898-bf9f-494d-a27c-ae7434f31098
	I0923 12:42:32.348107  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:32.348112  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:32.348116  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:32.348121  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:32.348127  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:32 GMT
	I0923 12:42:32.348297  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:32.846036  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:32.846068  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:32.846079  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:32.846085  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:32.849605  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:32.849627  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:32.849636  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:32.849640  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:32.849643  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:32.849646  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:32 GMT
	I0923 12:42:32.849650  533789 round_trippers.go:580]     Audit-Id: bd7ae99b-1093-4334-846e-9daa5f8db999
	I0923 12:42:32.849655  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:32.850210  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:33.345978  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:33.346010  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:33.346022  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:33.346027  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:33.349914  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:33.349945  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:33.349955  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:33 GMT
	I0923 12:42:33.349961  533789 round_trippers.go:580]     Audit-Id: dfc25ce5-3c33-4dd3-9a80-88a36ce3daea
	I0923 12:42:33.349965  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:33.349971  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:33.349974  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:33.349979  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:33.350193  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:33.350667  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:33.845896  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:33.845921  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:33.845930  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:33.845935  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:33.848565  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:33.848592  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:33.848598  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:33 GMT
	I0923 12:42:33.848602  533789 round_trippers.go:580]     Audit-Id: 032176ac-a145-4b9c-b718-02a6b226c5da
	I0923 12:42:33.848606  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:33.848608  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:33.848611  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:33.848614  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:33.848731  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:34.345338  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:34.345362  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:34.345370  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:34.345375  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:34.348151  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:34.348171  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:34.348178  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:34.348182  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:34.348185  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:34.348190  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:34.348195  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:34 GMT
	I0923 12:42:34.348197  533789 round_trippers.go:580]     Audit-Id: 0fff79e0-85dd-42b9-83e4-4d18c587b97d
	I0923 12:42:34.348323  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:34.846105  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:34.846134  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:34.846144  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:34.846148  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:34.848587  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:34.848608  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:34.848616  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:34.848620  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:34.848623  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:34 GMT
	I0923 12:42:34.848626  533789 round_trippers.go:580]     Audit-Id: 5dbef401-4301-43ff-b85d-298a451539d7
	I0923 12:42:34.848629  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:34.848633  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:34.848815  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:35.345413  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:35.345441  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:35.345451  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:35.345455  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:35.347957  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:35.347979  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:35.347986  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:35 GMT
	I0923 12:42:35.347990  533789 round_trippers.go:580]     Audit-Id: 8e2c9e4f-9144-4942-b77f-f931bdcb1b0c
	I0923 12:42:35.347993  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:35.347996  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:35.347998  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:35.348002  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:35.348190  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:35.845960  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:35.845987  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:35.845997  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:35.846005  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:35.848746  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:35.848773  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:35.848781  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:35 GMT
	I0923 12:42:35.848784  533789 round_trippers.go:580]     Audit-Id: 129b4e27-1230-4dcc-a02d-b30281d601b5
	I0923 12:42:35.848788  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:35.848793  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:35.848798  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:35.848801  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:35.848911  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:35.849332  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:36.345569  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:36.345594  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:36.345603  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:36.345606  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:36.347940  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:36.347963  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:36.347971  533789 round_trippers.go:580]     Audit-Id: d0b5166a-daa4-49f2-bb89-9a6c93c0eb54
	I0923 12:42:36.347976  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:36.347979  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:36.347983  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:36.347988  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:36.347993  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:36 GMT
	I0923 12:42:36.348189  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:36.845990  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:36.846019  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:36.846030  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:36.846036  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:36.848554  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:36.848584  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:36.848595  533789 round_trippers.go:580]     Audit-Id: 880b9977-2751-4a16-a427-95781a5a9d2a
	I0923 12:42:36.848602  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:36.848609  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:36.848614  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:36.848621  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:36.848626  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:36 GMT
	I0923 12:42:36.848803  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:37.345521  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:37.345551  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:37.345564  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:37.345571  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:37.349083  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:37.349109  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:37.349120  533789 round_trippers.go:580]     Audit-Id: bbd402d6-2fbd-4f55-a9bb-9f7ed53bbe83
	I0923 12:42:37.349125  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:37.349130  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:37.349143  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:37.349147  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:37.349151  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:37 GMT
	I0923 12:42:37.349560  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:37.846374  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:37.846404  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:37.846415  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:37.846421  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:37.849117  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:37.849143  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:37.849152  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:37.849159  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:37 GMT
	I0923 12:42:37.849165  533789 round_trippers.go:580]     Audit-Id: 2f65ae00-7414-4ae2-bf59-50f5a527d982
	I0923 12:42:37.849170  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:37.849177  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:37.849183  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:37.849381  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:37.849737  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:38.346104  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:38.346130  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:38.346139  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:38.346143  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:38.348844  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:38.348868  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:38.348877  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:38 GMT
	I0923 12:42:38.348883  533789 round_trippers.go:580]     Audit-Id: 5663a840-a837-4c5c-8f4f-ffe23669a270
	I0923 12:42:38.348887  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:38.348890  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:38.348894  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:38.348899  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:38.351976  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:38.845652  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:38.845681  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:38.845689  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:38.845693  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:38.849190  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:38.849216  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:38.849227  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:38.849234  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:38.849240  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:38.849245  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:38.849250  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:38 GMT
	I0923 12:42:38.849255  533789 round_trippers.go:580]     Audit-Id: 977ebdba-e0b4-4fb3-9f69-497f2b1fcc91
	I0923 12:42:38.849474  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:39.345821  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:39.345845  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:39.345854  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:39.345858  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:39.348246  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:39.348270  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:39.348281  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:39 GMT
	I0923 12:42:39.348286  533789 round_trippers.go:580]     Audit-Id: 973d49e2-0ecd-411a-a3f6-c89eef280549
	I0923 12:42:39.348291  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:39.348295  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:39.348299  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:39.348305  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:39.348460  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:39.846176  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:39.846206  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:39.846217  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:39.846225  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:39.849200  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:39.849221  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:39.849227  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:39.849231  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:39.849235  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:39.849238  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:39.849241  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:39 GMT
	I0923 12:42:39.849245  533789 round_trippers.go:580]     Audit-Id: e4061dfe-96b6-4179-9b34-79c60f601159
	I0923 12:42:39.849691  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:39.850073  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:40.345373  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:40.345403  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:40.345412  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:40.345415  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:40.348018  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:40.348050  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:40.348062  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:40.348068  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:40.348072  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:40 GMT
	I0923 12:42:40.348075  533789 round_trippers.go:580]     Audit-Id: 353ab718-d363-41f9-90c6-05ff368188b7
	I0923 12:42:40.348079  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:40.348081  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:40.348181  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:40.845931  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:40.845962  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:40.845971  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:40.845976  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:40.848307  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:40.848335  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:40.848345  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:40.848359  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:40 GMT
	I0923 12:42:40.848363  533789 round_trippers.go:580]     Audit-Id: 91d2798d-a6ee-485a-ab77-64c4798833b9
	I0923 12:42:40.848366  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:40.848369  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:40.848372  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:40.848526  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:41.345456  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:41.345483  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:41.345493  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:41.345498  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:41.347813  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:41.347835  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:41.347843  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:41.347847  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:41.347853  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:41.347858  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:41.347861  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:41 GMT
	I0923 12:42:41.347865  533789 round_trippers.go:580]     Audit-Id: 2a15f6d0-c4b5-426b-ba51-f3aaa971bf91
	I0923 12:42:41.348060  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:41.845676  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:41.845709  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:41.845721  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:41.845728  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:41.848158  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:41.848180  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:41.848187  533789 round_trippers.go:580]     Audit-Id: 2e1634af-3370-48f7-b768-508399198075
	I0923 12:42:41.848192  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:41.848194  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:41.848198  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:41.848201  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:41.848203  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:41 GMT
	I0923 12:42:41.848404  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:42.346222  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:42.346254  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:42.346268  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:42.346274  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:42.348767  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:42.348789  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:42.348798  533789 round_trippers.go:580]     Audit-Id: 2870230b-9820-4fea-bd38-eeb978a0174a
	I0923 12:42:42.348806  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:42.348811  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:42.348815  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:42.348819  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:42.348823  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:42 GMT
	I0923 12:42:42.348992  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:42.349341  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:42.845709  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:42.845742  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:42.845752  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:42.845758  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:42.848320  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:42.848348  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:42.848358  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:42 GMT
	I0923 12:42:42.848377  533789 round_trippers.go:580]     Audit-Id: aa44d8b9-4083-4d27-b789-9178a74c702e
	I0923 12:42:42.848385  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:42.848389  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:42.848393  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:42.848397  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:42.848546  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:43.346256  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:43.346282  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:43.346291  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:43.346296  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:43.348747  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:43.348766  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:43.348773  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:43.348777  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:43.348780  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:43 GMT
	I0923 12:42:43.348783  533789 round_trippers.go:580]     Audit-Id: 3a5a6c98-a46b-4381-aa31-42b9eec54e0d
	I0923 12:42:43.348786  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:43.348788  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:43.348961  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:43.845658  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:43.845687  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:43.845696  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:43.845700  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:43.848225  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:43.848258  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:43.848266  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:43.848271  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:43.848277  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:43 GMT
	I0923 12:42:43.848281  533789 round_trippers.go:580]     Audit-Id: 4543d16a-ea4f-4307-b16f-883cca358308
	I0923 12:42:43.848285  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:43.848288  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:43.848768  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:44.345656  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:44.345680  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:44.345693  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:44.345704  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:44.348077  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:44.348100  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:44.348108  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:44 GMT
	I0923 12:42:44.348111  533789 round_trippers.go:580]     Audit-Id: a249a65e-8e60-4145-85b5-19a507067926
	I0923 12:42:44.348114  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:44.348117  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:44.348119  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:44.348122  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:44.348354  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:44.846167  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:44.846194  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:44.846203  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:44.846207  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:44.848818  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:44.848840  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:44.848847  533789 round_trippers.go:580]     Audit-Id: ade54a33-09c8-4d3d-8198-e544268bd0c5
	I0923 12:42:44.848852  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:44.848855  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:44.848858  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:44.848860  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:44.848864  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:44 GMT
	I0923 12:42:44.849005  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:44.849340  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:45.345745  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:45.345771  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:45.345780  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:45.345785  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:45.348404  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:45.348428  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:45.348435  533789 round_trippers.go:580]     Audit-Id: 5c81f3bd-ff1e-48ed-ad20-8993029e9d8f
	I0923 12:42:45.348440  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:45.348442  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:45.348447  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:45.348450  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:45.348453  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:45 GMT
	I0923 12:42:45.348562  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:45.846363  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:45.846391  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:45.846401  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:45.846405  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:45.849149  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:45.849177  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:45.849188  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:45.849193  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:45.849199  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:45 GMT
	I0923 12:42:45.849205  533789 round_trippers.go:580]     Audit-Id: e1e28a4e-06f0-4386-98d6-569480ebea3d
	I0923 12:42:45.849212  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:45.849214  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:45.849327  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:46.346017  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:46.346049  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:46.346060  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:46.346063  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:46.348787  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:46.348820  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:46.348832  533789 round_trippers.go:580]     Audit-Id: 28c35328-0dee-4ded-b643-b76f8fd0a1c6
	I0923 12:42:46.348837  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:46.348840  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:46.348842  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:46.348845  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:46.348848  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:46 GMT
	I0923 12:42:46.348946  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:46.845517  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:46.845546  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:46.845555  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:46.845561  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:46.848142  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:46.848167  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:46.848178  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:46.848185  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:46.848191  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:46 GMT
	I0923 12:42:46.848195  533789 round_trippers.go:580]     Audit-Id: a4d3c825-3feb-4389-b42c-c699ed61fb36
	I0923 12:42:46.848199  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:46.848203  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:46.848457  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:47.346242  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:47.346275  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:47.346293  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:47.346300  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:47.349052  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:47.349073  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:47.349081  533789 round_trippers.go:580]     Audit-Id: 1251048c-3dca-4fa5-8065-ecb4ecdc88bd
	I0923 12:42:47.349086  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:47.349088  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:47.349091  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:47.349095  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:47.349097  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:47 GMT
	I0923 12:42:47.349309  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1244","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5501 chars]
	I0923 12:42:47.349793  533789 node_ready.go:53] node "multinode-915704" has status "Ready":"False"
	I0923 12:42:47.846186  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:47.846218  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:47.846228  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:47.846232  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:47.848860  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:47.848883  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:47.848890  533789 round_trippers.go:580]     Audit-Id: 0b2affeb-4962-4868-892e-b53c00db2093
	I0923 12:42:47.848895  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:47.848899  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:47.848904  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:47.848907  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:47.848912  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:47 GMT
	I0923 12:42:47.849148  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:47.849480  533789 node_ready.go:49] node "multinode-915704" has status "Ready":"True"
	I0923 12:42:47.849498  533789 node_ready.go:38] duration metric: took 16.504378231s for node "multinode-915704" to be "Ready" ...
	I0923 12:42:47.849507  533789 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:42:47.849568  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:42:47.849577  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:47.849585  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:47.849588  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:47.852992  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:47.853010  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:47.853020  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:47 GMT
	I0923 12:42:47.853029  533789 round_trippers.go:580]     Audit-Id: 63d0725f-5e8d-4b9c-bdba-9733a14e588d
	I0923 12:42:47.853032  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:47.853035  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:47.853037  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:47.853039  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:47.854125  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1281"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89102 chars]
	I0923 12:42:47.858242  533789 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace to be "Ready" ...
	I0923 12:42:47.858353  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:47.858365  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:47.858376  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:47.858381  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:47.861220  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:47.861245  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:47.861252  533789 round_trippers.go:580]     Audit-Id: dd5fb172-197e-4ac8-8e62-79b13c141167
	I0923 12:42:47.861258  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:47.861263  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:47.861269  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:47.861273  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:47.861277  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:47 GMT
	I0923 12:42:47.861396  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:47.862155  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:47.862180  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:47.862192  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:47.862197  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:47.864451  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:47.864468  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:47.864475  533789 round_trippers.go:580]     Audit-Id: c461204b-de54-49d9-b5da-49d37d79d44c
	I0923 12:42:47.864482  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:47.864485  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:47.864488  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:47.864492  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:47.864495  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:47 GMT
	I0923 12:42:47.864961  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:48.358714  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:48.358744  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:48.358766  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:48.358772  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:48.361398  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:48.361419  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:48.361426  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:48.361429  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:48.361432  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:48 GMT
	I0923 12:42:48.361435  533789 round_trippers.go:580]     Audit-Id: b8e8aa48-a4c0-4eb9-9c89-1f10898574d0
	I0923 12:42:48.361437  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:48.361440  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:48.361740  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:48.362221  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:48.362234  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:48.362241  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:48.362246  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:48.364124  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:48.364140  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:48.364148  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:48.364152  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:48 GMT
	I0923 12:42:48.364156  533789 round_trippers.go:580]     Audit-Id: b1360395-bad1-482d-887e-393422956d9b
	I0923 12:42:48.364160  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:48.364162  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:48.364165  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:48.364328  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:48.858884  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:48.858913  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:48.858937  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:48.858944  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:48.861881  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:48.861906  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:48.861914  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:48.861918  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:48.861921  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:48.861925  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:48 GMT
	I0923 12:42:48.861928  533789 round_trippers.go:580]     Audit-Id: e8abfb05-a660-48a8-838e-22bf30575ab7
	I0923 12:42:48.861931  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:48.862092  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:48.862644  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:48.862659  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:48.862667  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:48.862673  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:48.865540  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:48.865563  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:48.865574  533789 round_trippers.go:580]     Audit-Id: 17d67886-e491-4b30-8143-dbeb1ec0e10d
	I0923 12:42:48.865581  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:48.865587  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:48.865591  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:48.865595  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:48.865599  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:48 GMT
	I0923 12:42:48.865874  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:49.359480  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:49.359507  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:49.359516  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:49.359520  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:49.362475  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:49.362497  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:49.362504  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:49 GMT
	I0923 12:42:49.362507  533789 round_trippers.go:580]     Audit-Id: 5752faca-a972-4738-b597-f68403a3b327
	I0923 12:42:49.362509  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:49.362513  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:49.362515  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:49.362518  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:49.362823  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:49.363315  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:49.363329  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:49.363336  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:49.363340  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:49.365345  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:49.365365  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:49.365372  533789 round_trippers.go:580]     Audit-Id: 2ce213bb-9129-48e1-8966-b1aff6422975
	I0923 12:42:49.365375  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:49.365382  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:49.365387  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:49.365392  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:49.365398  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:49 GMT
	I0923 12:42:49.365666  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:49.859510  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:49.859546  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:49.859558  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:49.859563  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:49.863075  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:49.863126  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:49.863140  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:49 GMT
	I0923 12:42:49.863147  533789 round_trippers.go:580]     Audit-Id: 517b09c4-c448-4d5c-90e0-a05b26a822e4
	I0923 12:42:49.863152  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:49.863157  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:49.863161  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:49.863165  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:49.863286  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:49.863768  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:49.863787  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:49.863794  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:49.863799  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:49.866740  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:49.866797  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:49.866810  533789 round_trippers.go:580]     Audit-Id: a76cd2a9-d702-4c69-a702-d88fe5591c96
	I0923 12:42:49.866815  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:49.866821  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:49.866826  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:49.866832  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:49.866835  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:49 GMT
	I0923 12:42:49.866960  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:49.867450  533789 pod_ready.go:103] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:50.358476  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:50.358509  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:50.358518  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:50.358523  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:50.361453  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:50.361482  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:50.361492  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:50.361497  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:50.361501  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:50.361504  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:50 GMT
	I0923 12:42:50.361510  533789 round_trippers.go:580]     Audit-Id: 5c14503a-adf1-4215-bafc-9c647e623b80
	I0923 12:42:50.361513  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:50.361806  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:50.362336  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:50.362354  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:50.362362  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:50.362367  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:50.364942  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:50.364967  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:50.364975  533789 round_trippers.go:580]     Audit-Id: 0e075bd0-fb3f-4aeb-852e-5dbde425e0c2
	I0923 12:42:50.364978  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:50.364981  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:50.364984  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:50.364988  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:50.364991  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:50 GMT
	I0923 12:42:50.365115  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:50.858674  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:50.858706  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:50.858715  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:50.858719  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:50.861780  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:50.861804  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:50.861811  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:50 GMT
	I0923 12:42:50.861816  533789 round_trippers.go:580]     Audit-Id: 2ee2ff32-2342-4708-b77a-64873d05489a
	I0923 12:42:50.861820  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:50.861825  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:50.861829  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:50.861836  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:50.862003  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:50.862692  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:50.862717  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:50.862729  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:50.862735  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:50.865347  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:50.865424  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:50.865433  533789 round_trippers.go:580]     Audit-Id: 3bdfa622-9826-42a7-bb99-aa0c61e43467
	I0923 12:42:50.865437  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:50.865440  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:50.865443  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:50.865446  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:50.865449  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:50 GMT
	I0923 12:42:50.865569  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1281","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5278 chars]
	I0923 12:42:51.359322  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:51.359350  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:51.359360  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:51.359364  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:51.361898  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:51.361925  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:51.361936  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:51.361944  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:51.361983  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:51.362018  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:51 GMT
	I0923 12:42:51.362033  533789 round_trippers.go:580]     Audit-Id: 4be1a5b4-9a7a-4004-ac57-5a3be86994cf
	I0923 12:42:51.362055  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:51.362193  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:51.362849  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:51.362870  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:51.362880  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:51.362883  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:51.364896  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:51.364916  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:51.364926  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:51.364932  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:51.364940  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:51.364945  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:51 GMT
	I0923 12:42:51.364955  533789 round_trippers.go:580]     Audit-Id: 5c47e8dc-63b9-4436-b249-783590731a56
	I0923 12:42:51.364959  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:51.365116  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:51.858746  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:51.858789  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:51.858800  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:51.858803  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:51.861503  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:51.861529  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:51.861537  533789 round_trippers.go:580]     Audit-Id: e46fb3b0-5a9d-4895-959b-6467edaf130f
	I0923 12:42:51.861544  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:51.861547  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:51.861550  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:51.861554  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:51.861557  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:51 GMT
	I0923 12:42:51.861781  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:51.862451  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:51.862474  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:51.862483  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:51.862492  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:51.864930  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:51.864959  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:51.864970  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:51.864976  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:51.864980  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:51 GMT
	I0923 12:42:51.864985  533789 round_trippers.go:580]     Audit-Id: d39f591c-d033-4db7-a4e1-6853fba76aa3
	I0923 12:42:51.864989  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:51.864993  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:51.865169  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:52.358617  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:52.358648  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:52.358657  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:52.358661  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:52.361539  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:52.361569  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:52.361578  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:52 GMT
	I0923 12:42:52.361584  533789 round_trippers.go:580]     Audit-Id: addb16e9-1ac5-4bd7-b22e-3aed5ef9ac1b
	I0923 12:42:52.361588  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:52.361592  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:52.361596  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:52.361599  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:52.361761  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:52.362270  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:52.362289  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:52.362299  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:52.362303  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:52.364209  533789 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0923 12:42:52.364228  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:52.364237  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:52.364244  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:52.364249  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:52.364253  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:52 GMT
	I0923 12:42:52.364258  533789 round_trippers.go:580]     Audit-Id: 3f58f500-acab-4f0e-963b-3a6c3769c878
	I0923 12:42:52.364264  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:52.364402  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:52.364765  533789 pod_ready.go:103] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:52.859217  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:52.859245  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:52.859254  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:52.859256  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:52.862372  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:52.862402  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:52.862410  533789 round_trippers.go:580]     Audit-Id: b4acbeb1-87c0-4786-8a49-989f8bd85643
	I0923 12:42:52.862412  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:52.862416  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:52.862418  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:52.862422  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:52.862425  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:52 GMT
	I0923 12:42:52.862708  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:52.863277  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:52.863295  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:52.863304  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:52.863309  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:52.865778  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:52.865799  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:52.865809  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:52.865814  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:52.865818  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:52 GMT
	I0923 12:42:52.865822  533789 round_trippers.go:580]     Audit-Id: 400e8366-573f-454a-9d9c-660215870999
	I0923 12:42:52.865829  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:52.865836  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:52.865965  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:53.358680  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:53.358709  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:53.358722  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:53.358728  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:53.361387  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:53.361423  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:53.361432  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:53.361437  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:53.361439  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:53.361444  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:53.361446  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:53 GMT
	I0923 12:42:53.361449  533789 round_trippers.go:580]     Audit-Id: 74e566ce-970d-4416-b2c0-9554a22c100c
	I0923 12:42:53.361561  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:53.362173  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:53.362191  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:53.362202  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:53.362210  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:53.364267  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:53.364299  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:53.364309  533789 round_trippers.go:580]     Audit-Id: e8e7517b-1795-4325-8e38-0f7f220edf3a
	I0923 12:42:53.364313  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:53.364319  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:53.364325  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:53.364329  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:53.364335  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:53 GMT
	I0923 12:42:53.364443  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:53.859031  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:53.859069  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:53.859078  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:53.859083  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:53.862170  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:53.862206  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:53.862217  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:53.862224  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:53.862228  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:53.862233  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:53 GMT
	I0923 12:42:53.862237  533789 round_trippers.go:580]     Audit-Id: a0c32f1b-eac3-4879-b92e-a39a9851b5cb
	I0923 12:42:53.862240  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:53.862394  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:53.862970  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:53.862987  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:53.862994  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:53.862997  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:53.865449  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:53.865474  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:53.865484  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:53 GMT
	I0923 12:42:53.865489  533789 round_trippers.go:580]     Audit-Id: e8056dba-2b30-4c00-9e8a-ec331812b0ef
	I0923 12:42:53.865493  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:53.865497  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:53.865502  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:53.865507  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:53.865644  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:54.358463  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:54.358495  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:54.358505  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:54.358510  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:54.361584  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:54.361615  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:54.361624  533789 round_trippers.go:580]     Audit-Id: 022e04c9-3fda-4e24-a063-7704c07a18c9
	I0923 12:42:54.361631  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:54.361636  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:54.361639  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:54.361643  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:54.361647  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:54 GMT
	I0923 12:42:54.361819  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:54.362318  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:54.362331  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:54.362339  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:54.362344  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:54.364578  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:54.364603  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:54.364613  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:54.364618  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:54.364623  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:54.364627  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:54 GMT
	I0923 12:42:54.364633  533789 round_trippers.go:580]     Audit-Id: f526cc17-b96e-4630-a351-40e2840d1ef0
	I0923 12:42:54.364637  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:54.364732  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:54.365049  533789 pod_ready.go:103] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:54.859580  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:54.859611  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:54.859622  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:54.859626  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:54.862686  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:54.862719  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:54.862731  533789 round_trippers.go:580]     Audit-Id: c4973626-f469-42d3-a47f-b1f380d694ae
	I0923 12:42:54.862739  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:54.862744  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:54.862764  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:54.862771  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:54.862782  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:54 GMT
	I0923 12:42:54.862917  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:54.863539  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:54.863566  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:54.863576  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:54.863581  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:54.866075  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:54.866094  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:54.866102  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:54.866105  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:54.866108  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:54.866114  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:54.866120  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:54 GMT
	I0923 12:42:54.866124  533789 round_trippers.go:580]     Audit-Id: 41e2aa3e-7e14-48b7-997c-e7c889d81c96
	I0923 12:42:54.866292  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:55.358997  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:55.359090  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:55.359114  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:55.359123  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:55.361846  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:55.361870  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:55.361879  533789 round_trippers.go:580]     Audit-Id: 69f2f4b4-a81c-4e33-8510-e79eea3dd95f
	I0923 12:42:55.361883  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:55.361889  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:55.361894  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:55.361898  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:55.361901  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:55 GMT
	I0923 12:42:55.362009  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:55.362559  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:55.362583  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:55.362593  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:55.362599  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:55.364674  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:55.364696  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:55.364706  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:55.364710  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:55 GMT
	I0923 12:42:55.364715  533789 round_trippers.go:580]     Audit-Id: 1c4fc584-b6b6-4215-a0b0-fa9407bdd95a
	I0923 12:42:55.364720  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:55.364724  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:55.364729  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:55.364893  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:55.858584  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:55.858617  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:55.858629  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:55.858635  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:55.861600  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:55.861637  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:55.861647  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:55.861653  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:55.861658  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:55 GMT
	I0923 12:42:55.861662  533789 round_trippers.go:580]     Audit-Id: 1cbd8f7a-a889-4da7-bbbc-ac1308b92d0a
	I0923 12:42:55.861666  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:55.861669  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:55.861772  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:55.862269  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:55.862288  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:55.862298  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:55.862304  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:55.865086  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:55.865126  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:55.865137  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:55.865145  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:55.865149  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:55 GMT
	I0923 12:42:55.865154  533789 round_trippers.go:580]     Audit-Id: c515cbe0-8d9a-4453-92a9-699e71adc035
	I0923 12:42:55.865158  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:55.865162  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:55.865775  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:56.358485  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:56.358514  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:56.358523  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:56.358527  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:56.361025  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:56.361048  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:56.361059  533789 round_trippers.go:580]     Audit-Id: 6b99741b-1d23-4ebb-a13e-a4281bf08d1d
	I0923 12:42:56.361064  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:56.361069  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:56.361072  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:56.361078  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:56.361082  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:56 GMT
	I0923 12:42:56.361197  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:56.361909  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:56.361932  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:56.361943  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:56.361953  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:56.364012  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:56.364033  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:56.364042  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:56.364046  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:56 GMT
	I0923 12:42:56.364050  533789 round_trippers.go:580]     Audit-Id: 8d420305-f73a-456b-886e-9a77d0330977
	I0923 12:42:56.364053  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:56.364057  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:56.364060  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:56.364178  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:56.858914  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:56.858960  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:56.858970  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:56.858976  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:56.862645  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:56.862686  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:56.862694  533789 round_trippers.go:580]     Audit-Id: f327fd06-9db5-4b18-ad63-9413adf2d158
	I0923 12:42:56.862698  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:56.862703  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:56.862709  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:56.862714  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:56.862718  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:56 GMT
	I0923 12:42:56.862864  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:56.863635  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:56.863661  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:56.863673  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:56.863678  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:56.866843  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:56.866867  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:56.866874  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:56.866878  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:56.866880  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:56.866883  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:56.866885  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:56 GMT
	I0923 12:42:56.866889  533789 round_trippers.go:580]     Audit-Id: dddc2e2f-6380-4204-a00c-0250b1fa912e
	I0923 12:42:56.867030  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:56.867391  533789 pod_ready.go:103] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:57.358587  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:57.358620  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:57.358633  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:57.358638  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:57.361979  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:57.362004  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:57.362011  533789 round_trippers.go:580]     Audit-Id: 5e1dc853-5556-4205-8029-c6b854ff1c95
	I0923 12:42:57.362017  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:57.362025  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:57.362029  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:57.362035  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:57.362039  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:57 GMT
	I0923 12:42:57.362151  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:57.362656  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:57.362672  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:57.362679  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:57.362682  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:57.365086  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:57.365106  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:57.365112  533789 round_trippers.go:580]     Audit-Id: 17f2bb0c-e1a2-4df0-8489-8b696d95edc4
	I0923 12:42:57.365116  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:57.365119  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:57.365121  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:57.365132  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:57.365135  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:57 GMT
	I0923 12:42:57.365292  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:57.859016  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:57.859060  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:57.859077  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:57.859083  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:57.862178  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:57.862211  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:57.862224  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:57 GMT
	I0923 12:42:57.862231  533789 round_trippers.go:580]     Audit-Id: 73e92740-4197-438f-9eea-e8718ee41904
	I0923 12:42:57.862237  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:57.862241  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:57.862245  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:57.862249  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:57.862470  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:57.863262  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:57.863285  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:57.863297  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:57.863303  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:57.865934  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:57.865978  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:57.865988  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:57.865993  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:57.865999  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:57.866004  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:57.866008  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:57 GMT
	I0923 12:42:57.866012  533789 round_trippers.go:580]     Audit-Id: 9e7996ab-cf08-42fa-ba16-243b61fbca59
	I0923 12:42:57.866252  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:58.358963  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:58.358990  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:58.359000  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:58.359004  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:58.361393  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:58.361415  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:58.361422  533789 round_trippers.go:580]     Audit-Id: 972c9dcc-9096-461a-b43b-453c7f52268e
	I0923 12:42:58.361426  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:58.361429  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:58.361435  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:58.361439  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:58.361443  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:58 GMT
	I0923 12:42:58.361589  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:58.362067  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:58.362081  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:58.362089  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:58.362092  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:58.364133  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:58.364154  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:58.364160  533789 round_trippers.go:580]     Audit-Id: 083923c7-8de0-474a-a235-5bf9ee25e823
	I0923 12:42:58.364165  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:58.364169  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:58.364173  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:58.364177  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:58.364183  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:58 GMT
	I0923 12:42:58.364312  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:58.859330  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:58.859366  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:58.859378  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:58.859385  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:58.862391  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:58.862419  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:58.862426  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:58.862430  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:58.862432  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:58.862438  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:58 GMT
	I0923 12:42:58.862440  533789 round_trippers.go:580]     Audit-Id: a252f1f3-c250-43fe-85d3-612fd6c2aec4
	I0923 12:42:58.862443  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:58.862536  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:58.863073  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:58.863090  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:58.863098  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:58.863102  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:58.865634  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:58.865660  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:58.865668  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:58 GMT
	I0923 12:42:58.865672  533789 round_trippers.go:580]     Audit-Id: b2e12cc4-4fd8-437c-aec8-1b0ee354575d
	I0923 12:42:58.865676  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:58.865679  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:58.865683  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:58.865686  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:58.865792  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:59.359211  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:59.359236  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:59.359247  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:59.359251  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:59.361954  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:59.361985  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:59.361996  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:59.362002  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:59 GMT
	I0923 12:42:59.362007  533789 round_trippers.go:580]     Audit-Id: bbfcfd8d-1716-4d1a-a432-4863be3c448b
	I0923 12:42:59.362011  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:59.362015  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:59.362019  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:59.362136  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:59.362843  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:59.362866  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:59.362876  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:59.362881  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:59.365025  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:59.365052  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:59.365059  533789 round_trippers.go:580]     Audit-Id: ce99cd9e-210c-4384-8f43-b5684bcc90ae
	I0923 12:42:59.365062  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:59.365064  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:59.365067  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:59.365070  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:59.365073  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:59 GMT
	I0923 12:42:59.365237  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:42:59.365611  533789 pod_ready.go:103] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:42:59.858962  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:42:59.859016  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:59.859028  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:59.859034  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:59.862797  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:42:59.862828  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:59.862836  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:59.862840  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:59.862843  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:59.862846  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:59.862849  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:59 GMT
	I0923 12:42:59.862852  533789 round_trippers.go:580]     Audit-Id: b8bc7add-760d-4b30-8ca8-0d36c8b8d6c6
	I0923 12:42:59.863141  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:42:59.863962  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:42:59.863988  533789 round_trippers.go:469] Request Headers:
	I0923 12:42:59.864000  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:42:59.864006  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:42:59.866303  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:42:59.866324  533789 round_trippers.go:577] Response Headers:
	I0923 12:42:59.866332  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:42:59 GMT
	I0923 12:42:59.866335  533789 round_trippers.go:580]     Audit-Id: b769469c-1b61-438a-bbaf-3036263b4060
	I0923 12:42:59.866338  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:42:59.866340  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:42:59.866342  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:42:59.866345  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:42:59.866704  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.359516  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:43:00.359548  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.359557  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.359563  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.362918  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:00.362955  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.362967  533789 round_trippers.go:580]     Audit-Id: e465022c-6447-4238-80ba-dd70eccea18a
	I0923 12:43:00.362972  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.362977  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.362981  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.362984  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.362994  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.363166  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1145","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0923 12:43:00.363753  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:00.363773  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.363781  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.363787  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.365844  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.365864  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.365873  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.365880  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.365883  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.365887  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.365891  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.365895  533789 round_trippers.go:580]     Audit-Id: 889d2b43-c303-417c-9014-3ad2267a8a46
	I0923 12:43:00.366167  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.858860  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-s5jv2
	I0923 12:43:00.858896  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.858908  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.858914  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.873328  533789 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0923 12:43:00.873358  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.873367  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.873370  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.873374  533789 round_trippers.go:580]     Audit-Id: a94ccf8a-fc72-4665-9f8a-ff6df4f4c0c8
	I0923 12:43:00.873377  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.873380  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.873382  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.873623  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1308","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I0923 12:43:00.874328  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:00.874355  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.874366  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.874372  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.877696  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:00.877718  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.877726  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.877730  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.877735  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.877738  533789 round_trippers.go:580]     Audit-Id: 0e0d5eb5-d612-4e98-aa6c-4576a4fb2b5b
	I0923 12:43:00.877742  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.877745  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.878069  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.878399  533789 pod_ready.go:93] pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:00.878418  533789 pod_ready.go:82] duration metric: took 13.020140719s for pod "coredns-7c65d6cfc9-s5jv2" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.878429  533789 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.878492  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-915704
	I0923 12:43:00.878501  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.878509  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.878515  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.881427  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.881448  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.881456  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.881460  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.881462  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.881465  533789 round_trippers.go:580]     Audit-Id: 5bc11e2e-3efb-4d69-94b0-a566431b0793
	I0923 12:43:00.881467  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.881471  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.881889  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-915704","namespace":"kube-system","uid":"298e300f-3a4d-4d3c-803d-d4aa5e369e92","resourceVersion":"1271","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.233:2379","kubernetes.io/config.hash":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.mirror":"c180aea16ed15616d553b5002a6e5b74","kubernetes.io/config.seen":"2024-09-23T12:35:14.769599942Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6686 chars]
	I0923 12:43:00.882356  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:00.882372  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.882379  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.882383  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.884657  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.884675  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.884682  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.884686  533789 round_trippers.go:580]     Audit-Id: f652c027-c4d2-4fa8-b2fc-9ccbef3aff69
	I0923 12:43:00.884689  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.884693  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.884696  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.884700  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.884952  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.885312  533789 pod_ready.go:93] pod "etcd-multinode-915704" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:00.885335  533789 pod_ready.go:82] duration metric: took 6.90041ms for pod "etcd-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.885353  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.885415  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-915704
	I0923 12:43:00.885425  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.885432  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.885436  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.887961  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.887978  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.887985  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.887990  533789 round_trippers.go:580]     Audit-Id: be9a05f8-cbea-42de-b71e-6d4baa7fdd17
	I0923 12:43:00.887994  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.887997  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.888001  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.888004  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.888567  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-915704","namespace":"kube-system","uid":"2c5266db-b2d2-41ac-8bf7-eda1b883d3e3","resourceVersion":"1275","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.233:8443","kubernetes.io/config.hash":"3115e5dacc8088b6f9144058d3597214","kubernetes.io/config.mirror":"3115e5dacc8088b6f9144058d3597214","kubernetes.io/config.seen":"2024-09-23T12:35:14.769595152Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7912 chars]
	I0923 12:43:00.889027  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:00.889044  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.889052  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.889056  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.892045  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.892069  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.892078  533789 round_trippers.go:580]     Audit-Id: 55add694-b8a5-4731-9e36-2398ab87935f
	I0923 12:43:00.892085  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.892090  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.892093  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.892097  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.892104  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.892430  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.892747  533789 pod_ready.go:93] pod "kube-apiserver-multinode-915704" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:00.892765  533789 pod_ready.go:82] duration metric: took 7.405884ms for pod "kube-apiserver-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.892775  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.892843  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-915704
	I0923 12:43:00.892852  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.892858  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.892862  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.895225  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.895250  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.895259  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.895265  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.895270  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.895276  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.895280  533789 round_trippers.go:580]     Audit-Id: 124dc789-9ae6-4be3-a42d-5f2086fc8ab1
	I0923 12:43:00.895284  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.895700  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-915704","namespace":"kube-system","uid":"b95455eb-960c-44bf-9c6d-b39459f4c498","resourceVersion":"1269","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02fde30fd2ad3cda5e3cacafb6edf88d","kubernetes.io/config.mirror":"02fde30fd2ad3cda5e3cacafb6edf88d","kubernetes.io/config.seen":"2024-09-23T12:35:14.769598186Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0923 12:43:00.896244  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:00.896261  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.896268  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.896273  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.898509  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.898526  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.898533  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.898537  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.898540  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.898544  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.898547  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.898549  533789 round_trippers.go:580]     Audit-Id: ad60dbc4-2b77-4850-8df6-aa97825a3417
	I0923 12:43:00.898731  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:00.899139  533789 pod_ready.go:93] pod "kube-controller-manager-multinode-915704" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:00.899160  533789 pod_ready.go:82] duration metric: took 6.379243ms for pod "kube-controller-manager-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.899174  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hgdzz" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.899237  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgdzz
	I0923 12:43:00.899246  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.899253  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.899258  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.901391  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.901408  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.901416  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.901422  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.901428  533789 round_trippers.go:580]     Audit-Id: d920c162-66b0-4c39-a075-7f775388b87f
	I0923 12:43:00.901432  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.901436  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.901441  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.901613  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgdzz","generateName":"kube-proxy-","namespace":"kube-system","uid":"c9ae5011-0233-4713-83c0-5bbc9829abf9","resourceVersion":"991","creationTimestamp":"2024-09-23T12:36:10Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:36:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6207 chars]
	I0923 12:43:00.902132  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m02
	I0923 12:43:00.902149  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:00.902156  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:00.902163  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:00.904984  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:00.904999  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:00.905005  533789 round_trippers.go:580]     Audit-Id: 327e94df-3ddd-46d0-b387-f7ebf57e13a1
	I0923 12:43:00.905010  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:00.905013  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:00.905016  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:00.905018  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:00.905021  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:00 GMT
	I0923 12:43:00.905353  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704-m02","uid":"aee80d3c-b81a-428e-9a4a-6e531d5a77ec","resourceVersion":"1015","creationTimestamp":"2024-09-23T12:40:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_23T12_40_23_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:40:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3814 chars]
	I0923 12:43:00.905612  533789 pod_ready.go:93] pod "kube-proxy-hgdzz" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:00.905627  533789 pod_ready.go:82] duration metric: took 6.447485ms for pod "kube-proxy-hgdzz" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:00.905637  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jthg2" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:01.059017  533789 request.go:632] Waited for 153.306667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jthg2
	I0923 12:43:01.059121  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jthg2
	I0923 12:43:01.059127  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:01.059135  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:01.059147  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:01.062452  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:01.062495  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:01.062506  533789 round_trippers.go:580]     Audit-Id: cfa02e85-8ed1-486f-8cb7-fdb1eed0a0a5
	I0923 12:43:01.062513  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:01.062516  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:01.062520  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:01.062523  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:01.062527  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:01 GMT
	I0923 12:43:01.062656  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jthg2","generateName":"kube-proxy-","namespace":"kube-system","uid":"5bb0bd6f-f3b5-4750-bf9c-dd24b581e10f","resourceVersion":"1090","creationTimestamp":"2024-09-23T12:37:12Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:37:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6203 chars]
	I0923 12:43:01.258879  533789 request.go:632] Waited for 195.71768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m03
	I0923 12:43:01.258958  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704-m03
	I0923 12:43:01.258965  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:01.258975  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:01.258991  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:01.262100  533789 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0923 12:43:01.262137  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:01.262150  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:01.262157  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:01.262165  533789 round_trippers.go:580]     Content-Length: 210
	I0923 12:43:01.262170  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:01 GMT
	I0923 12:43:01.262174  533789 round_trippers.go:580]     Audit-Id: 13d10adf-c887-40d0-bfbb-8ffd80c71fed
	I0923 12:43:01.262181  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:01.262187  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:01.262215  533789 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-915704-m03\" not found","reason":"NotFound","details":{"name":"multinode-915704-m03","kind":"nodes"},"code":404}
	I0923 12:43:01.262359  533789 pod_ready.go:98] node "multinode-915704-m03" hosting pod "kube-proxy-jthg2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-915704-m03": nodes "multinode-915704-m03" not found
	I0923 12:43:01.262380  533789 pod_ready.go:82] duration metric: took 356.736632ms for pod "kube-proxy-jthg2" in "kube-system" namespace to be "Ready" ...
	E0923 12:43:01.262389  533789 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-915704-m03" hosting pod "kube-proxy-jthg2" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-915704-m03": nodes "multinode-915704-m03" not found
	I0923 12:43:01.262396  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rmgjt" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:01.459785  533789 request.go:632] Waited for 197.303091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rmgjt
	I0923 12:43:01.459883  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rmgjt
	I0923 12:43:01.459890  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:01.459902  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:01.459909  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:01.462690  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:01.462721  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:01.462732  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:01.462737  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:01.462742  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:01.462747  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:01 GMT
	I0923 12:43:01.462774  533789 round_trippers.go:580]     Audit-Id: 04254f78-ae7e-4e6f-a12a-3b25d5037f2e
	I0923 12:43:01.462779  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:01.462980  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rmgjt","generateName":"kube-proxy-","namespace":"kube-system","uid":"d5d86b98-706f-411f-8209-017ecf7d533f","resourceVersion":"1251","creationTimestamp":"2024-09-23T12:35:19Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"10f79035-8a94-491f-a5a1-907b1d0c3ef9","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"10f79035-8a94-491f-a5a1-907b1d0c3ef9\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0923 12:43:01.658954  533789 request.go:632] Waited for 195.37659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:01.659065  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:01.659074  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:01.659085  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:01.659092  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:01.661815  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:01.661846  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:01.661859  533789 round_trippers.go:580]     Audit-Id: d98ce006-f5cb-48f5-a2d0-94f487bd5498
	I0923 12:43:01.661863  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:01.661867  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:01.661874  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:01.661878  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:01.661883  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:01 GMT
	I0923 12:43:01.662020  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:01.662497  533789 pod_ready.go:93] pod "kube-proxy-rmgjt" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:01.662530  533789 pod_ready.go:82] duration metric: took 400.123073ms for pod "kube-proxy-rmgjt" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:01.662545  533789 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:01.859419  533789 request.go:632] Waited for 196.788931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-915704
	I0923 12:43:01.859506  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-915704
	I0923 12:43:01.859511  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:01.859533  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:01.859539  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:01.862495  533789 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0923 12:43:01.862526  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:01.862536  533789 round_trippers.go:580]     Audit-Id: 0dfd5ad2-184d-43f7-ac45-843e99bf6992
	I0923 12:43:01.862541  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:01.862546  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:01.862550  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:01.862554  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:01.862557  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:01 GMT
	I0923 12:43:01.862681  533789 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-915704","namespace":"kube-system","uid":"6fdd28a4-9d1c-47b1-b14c-212986f47650","resourceVersion":"1260","creationTimestamp":"2024-09-23T12:35:14Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f436c981b3942bad9048e7a5ca8911e5","kubernetes.io/config.mirror":"f436c981b3942bad9048e7a5ca8911e5","kubernetes.io/config.seen":"2024-09-23T12:35:14.769599203Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0923 12:43:02.059718  533789 request.go:632] Waited for 196.452098ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:02.059800  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes/multinode-915704
	I0923 12:43:02.059805  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.059813  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.059817  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.062871  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:02.062954  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.062974  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.062980  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.062986  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.062992  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.062996  533789 round_trippers.go:580]     Audit-Id: 1a6d20e8-c9ae-43b1-a39d-251c3c3dff5e
	I0923 12:43:02.063001  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.063141  533789 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-09-23T12:35:12Z","fieldsType":"FieldsV1","f [truncated 5158 chars]
	I0923 12:43:02.063583  533789 pod_ready.go:93] pod "kube-scheduler-multinode-915704" in "kube-system" namespace has status "Ready":"True"
	I0923 12:43:02.063606  533789 pod_ready.go:82] duration metric: took 401.047928ms for pod "kube-scheduler-multinode-915704" in "kube-system" namespace to be "Ready" ...
	I0923 12:43:02.063622  533789 pod_ready.go:39] duration metric: took 14.214105378s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:43:02.063648  533789 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:43:02.063718  533789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:43:02.079349  533789 command_runner.go:130] > 1725
	I0923 12:43:02.079423  533789 api_server.go:72] duration metric: took 30.928798484s to wait for apiserver process to appear ...
	I0923 12:43:02.079435  533789 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:43:02.079476  533789 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:43:02.085253  533789 api_server.go:279] https://192.168.39.233:8443/healthz returned 200:
	ok
	I0923 12:43:02.085331  533789 round_trippers.go:463] GET https://192.168.39.233:8443/version
	I0923 12:43:02.085340  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.085350  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.085358  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.086325  533789 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0923 12:43:02.086345  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.086352  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.086358  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.086361  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.086364  533789 round_trippers.go:580]     Content-Length: 263
	I0923 12:43:02.086367  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.086369  533789 round_trippers.go:580]     Audit-Id: 38a072eb-eeae-4986-bcaa-cec4b4bd504b
	I0923 12:43:02.086371  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.086388  533789 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 12:43:02.086430  533789 api_server.go:141] control plane version: v1.31.1
	I0923 12:43:02.086446  533789 api_server.go:131] duration metric: took 7.005774ms to wait for apiserver health ...
	I0923 12:43:02.086455  533789 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 12:43:02.258840  533789 request.go:632] Waited for 172.302832ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:43:02.258932  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:43:02.258938  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.258946  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.258953  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.262930  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:02.262966  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.262979  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.262987  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.262993  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.262999  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.263005  533789 round_trippers.go:580]     Audit-Id: 3c239a8e-3da8-4b94-9606-dce4c9ca8924
	I0923 12:43:02.263011  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.263873  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1312"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1308","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89462 chars]
	I0923 12:43:02.266547  533789 system_pods.go:59] 12 kube-system pods found
	I0923 12:43:02.266580  533789 system_pods.go:61] "coredns-7c65d6cfc9-s5jv2" [0dc645c9-049b-41b4-abb9-efb0c3496da5] Running
	I0923 12:43:02.266586  533789 system_pods.go:61] "etcd-multinode-915704" [298e300f-3a4d-4d3c-803d-d4aa5e369e92] Running
	I0923 12:43:02.266589  533789 system_pods.go:61] "kindnet-cddh6" [f28822f1-bc2c-491a-b022-35c17323bab5] Running
	I0923 12:43:02.266593  533789 system_pods.go:61] "kindnet-kt7cw" [130be908-3588-4c06-8595-64df636abc2b] Running
	I0923 12:43:02.266596  533789 system_pods.go:61] "kindnet-lb8gc" [b3215e24-3c69-4da8-8b5e-db638532efe2] Running
	I0923 12:43:02.266600  533789 system_pods.go:61] "kube-apiserver-multinode-915704" [2c5266db-b2d2-41ac-8bf7-eda1b883d3e3] Running
	I0923 12:43:02.266606  533789 system_pods.go:61] "kube-controller-manager-multinode-915704" [b95455eb-960c-44bf-9c6d-b39459f4c498] Running
	I0923 12:43:02.266609  533789 system_pods.go:61] "kube-proxy-hgdzz" [c9ae5011-0233-4713-83c0-5bbc9829abf9] Running
	I0923 12:43:02.266612  533789 system_pods.go:61] "kube-proxy-jthg2" [5bb0bd6f-f3b5-4750-bf9c-dd24b581e10f] Running
	I0923 12:43:02.266615  533789 system_pods.go:61] "kube-proxy-rmgjt" [d5d86b98-706f-411f-8209-017ecf7d533f] Running
	I0923 12:43:02.266618  533789 system_pods.go:61] "kube-scheduler-multinode-915704" [6fdd28a4-9d1c-47b1-b14c-212986f47650] Running
	I0923 12:43:02.266623  533789 system_pods.go:61] "storage-provisioner" [ec90818c-184f-4066-a5c9-f4875d0b1354] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 12:43:02.266631  533789 system_pods.go:74] duration metric: took 180.169944ms to wait for pod list to return data ...
	I0923 12:43:02.266640  533789 default_sa.go:34] waiting for default service account to be created ...
	I0923 12:43:02.459043  533789 request.go:632] Waited for 192.30567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:43:02.459113  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/default/serviceaccounts
	I0923 12:43:02.459119  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.459129  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.459166  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.462557  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:02.462586  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.462596  533789 round_trippers.go:580]     Content-Length: 262
	I0923 12:43:02.462602  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.462607  533789 round_trippers.go:580]     Audit-Id: 17fd5512-b873-42f9-93c9-5baef6ed25f6
	I0923 12:43:02.462613  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.462619  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.462622  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.462627  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.462651  533789 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1312"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f10533a2-fd69-47ec-aa30-b82aff79df10","resourceVersion":"296","creationTimestamp":"2024-09-23T12:35:19Z"}}]}
	I0923 12:43:02.462894  533789 default_sa.go:45] found service account: "default"
	I0923 12:43:02.462919  533789 default_sa.go:55] duration metric: took 196.272508ms for default service account to be created ...
	I0923 12:43:02.462928  533789 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 12:43:02.659535  533789 request.go:632] Waited for 196.426938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:43:02.659610  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/namespaces/kube-system/pods
	I0923 12:43:02.659618  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.659630  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.659635  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.662850  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:02.662888  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.662899  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.662905  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.662910  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.662913  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.662918  533789 round_trippers.go:580]     Audit-Id: da518312-25b8-4285-b3ff-52f806f3db30
	I0923 12:43:02.662922  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.663712  533789 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1312"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-s5jv2","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0dc645c9-049b-41b4-abb9-efb0c3496da5","resourceVersion":"1308","creationTimestamp":"2024-09-23T12:35:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"3bb070c6-417f-4b45-a2ff-737fe5f977e1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T12:35:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3bb070c6-417f-4b45-a2ff-737fe5f977e1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89462 chars]
	I0923 12:43:02.666478  533789 system_pods.go:86] 12 kube-system pods found
	I0923 12:43:02.666509  533789 system_pods.go:89] "coredns-7c65d6cfc9-s5jv2" [0dc645c9-049b-41b4-abb9-efb0c3496da5] Running
	I0923 12:43:02.666516  533789 system_pods.go:89] "etcd-multinode-915704" [298e300f-3a4d-4d3c-803d-d4aa5e369e92] Running
	I0923 12:43:02.666526  533789 system_pods.go:89] "kindnet-cddh6" [f28822f1-bc2c-491a-b022-35c17323bab5] Running
	I0923 12:43:02.666532  533789 system_pods.go:89] "kindnet-kt7cw" [130be908-3588-4c06-8595-64df636abc2b] Running
	I0923 12:43:02.666541  533789 system_pods.go:89] "kindnet-lb8gc" [b3215e24-3c69-4da8-8b5e-db638532efe2] Running
	I0923 12:43:02.666546  533789 system_pods.go:89] "kube-apiserver-multinode-915704" [2c5266db-b2d2-41ac-8bf7-eda1b883d3e3] Running
	I0923 12:43:02.666552  533789 system_pods.go:89] "kube-controller-manager-multinode-915704" [b95455eb-960c-44bf-9c6d-b39459f4c498] Running
	I0923 12:43:02.666561  533789 system_pods.go:89] "kube-proxy-hgdzz" [c9ae5011-0233-4713-83c0-5bbc9829abf9] Running
	I0923 12:43:02.666567  533789 system_pods.go:89] "kube-proxy-jthg2" [5bb0bd6f-f3b5-4750-bf9c-dd24b581e10f] Running
	I0923 12:43:02.666575  533789 system_pods.go:89] "kube-proxy-rmgjt" [d5d86b98-706f-411f-8209-017ecf7d533f] Running
	I0923 12:43:02.666580  533789 system_pods.go:89] "kube-scheduler-multinode-915704" [6fdd28a4-9d1c-47b1-b14c-212986f47650] Running
	I0923 12:43:02.666591  533789 system_pods.go:89] "storage-provisioner" [ec90818c-184f-4066-a5c9-f4875d0b1354] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 12:43:02.666600  533789 system_pods.go:126] duration metric: took 203.665385ms to wait for k8s-apps to be running ...
	I0923 12:43:02.666610  533789 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 12:43:02.666671  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:43:02.682336  533789 system_svc.go:56] duration metric: took 15.712245ms WaitForService to wait for kubelet
	I0923 12:43:02.682370  533789 kubeadm.go:582] duration metric: took 31.531745772s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 12:43:02.682390  533789 node_conditions.go:102] verifying NodePressure condition ...
	I0923 12:43:02.859862  533789 request.go:632] Waited for 177.370424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.233:8443/api/v1/nodes
	I0923 12:43:02.859924  533789 round_trippers.go:463] GET https://192.168.39.233:8443/api/v1/nodes
	I0923 12:43:02.859929  533789 round_trippers.go:469] Request Headers:
	I0923 12:43:02.859936  533789 round_trippers.go:473]     Accept: application/json, */*
	I0923 12:43:02.859940  533789 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0923 12:43:02.863548  533789 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 12:43:02.863582  533789 round_trippers.go:577] Response Headers:
	I0923 12:43:02.863590  533789 round_trippers.go:580]     Date: Mon, 23 Sep 2024 12:43:02 GMT
	I0923 12:43:02.863595  533789 round_trippers.go:580]     Audit-Id: e9ccaab6-4e60-4c06-a198-10de08dcf1ee
	I0923 12:43:02.863599  533789 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 12:43:02.863603  533789 round_trippers.go:580]     Content-Type: application/json
	I0923 12:43:02.863606  533789 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6f345033-8ddf-4917-9b2b-2e372f772fd7
	I0923 12:43:02.863610  533789 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 4f7f44c4-4207-4bda-8ce3-3f5e00a1f57a
	I0923 12:43:02.863759  533789 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1312"},"items":[{"metadata":{"name":"multinode-915704","uid":"4da14d98-d928-483b-bd64-744c9dd33d89","resourceVersion":"1285","creationTimestamp":"2024-09-23T12:35:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-915704","kubernetes.io/os":"linux","minikube.k8s.io/commit":"30f673d6edb6d12f8aba2f7e30667ea1b6d205d1","minikube.k8s.io/name":"multinode-915704","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T12_35_15_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10018 chars]
	I0923 12:43:02.864330  533789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:43:02.864363  533789 node_conditions.go:123] node cpu capacity is 2
	I0923 12:43:02.864388  533789 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0923 12:43:02.864392  533789 node_conditions.go:123] node cpu capacity is 2
	I0923 12:43:02.864395  533789 node_conditions.go:105] duration metric: took 182.000795ms to run NodePressure ...
	I0923 12:43:02.864415  533789 start.go:241] waiting for startup goroutines ...
	I0923 12:43:02.864423  533789 start.go:246] waiting for cluster config update ...
	I0923 12:43:02.864437  533789 start.go:255] writing updated cluster config ...
	I0923 12:43:02.867410  533789 out.go:201] 
	I0923 12:43:02.869706  533789 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:43:02.869811  533789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/config.json ...
	I0923 12:43:02.872485  533789 out.go:177] * Starting "multinode-915704-m02" worker node in "multinode-915704" cluster
	I0923 12:43:02.874551  533789 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:43:02.874601  533789 cache.go:56] Caching tarball of preloaded images
	I0923 12:43:02.874772  533789 preload.go:172] Found /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:43:02.874788  533789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:43:02.874909  533789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/config.json ...
	I0923 12:43:02.875172  533789 start.go:360] acquireMachinesLock for multinode-915704-m02: {Name:mk9742766ed80b377dab18455a5851b42572655c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 12:43:02.875243  533789 start.go:364] duration metric: took 45.523µs to acquireMachinesLock for "multinode-915704-m02"
	I0923 12:43:02.875266  533789 start.go:96] Skipping create...Using existing machine configuration
	I0923 12:43:02.875273  533789 fix.go:54] fixHost starting: m02
	I0923 12:43:02.875589  533789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:43:02.875637  533789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:43:02.892119  533789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36275
	I0923 12:43:02.892686  533789 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:43:02.893237  533789 main.go:141] libmachine: Using API Version  1
	I0923 12:43:02.893260  533789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:43:02.893611  533789 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:43:02.893801  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:02.893980  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetState
	I0923 12:43:02.895752  533789 fix.go:112] recreateIfNeeded on multinode-915704-m02: state=Stopped err=<nil>
	I0923 12:43:02.895779  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	W0923 12:43:02.895945  533789 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 12:43:02.897805  533789 out.go:177] * Restarting existing kvm2 VM for "multinode-915704-m02" ...
	I0923 12:43:02.899038  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .Start
	I0923 12:43:02.899244  533789 main.go:141] libmachine: (multinode-915704-m02) Ensuring networks are active...
	I0923 12:43:02.899949  533789 main.go:141] libmachine: (multinode-915704-m02) Ensuring network default is active
	I0923 12:43:02.900312  533789 main.go:141] libmachine: (multinode-915704-m02) Ensuring network mk-multinode-915704 is active
	I0923 12:43:02.900730  533789 main.go:141] libmachine: (multinode-915704-m02) Getting domain xml...
	I0923 12:43:02.901474  533789 main.go:141] libmachine: (multinode-915704-m02) Creating domain...
	I0923 12:43:04.178482  533789 main.go:141] libmachine: (multinode-915704-m02) Waiting to get IP...
	I0923 12:43:04.179466  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:04.179908  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:04.180050  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:04.179894  534127 retry.go:31] will retry after 194.461682ms: waiting for machine to come up
	I0923 12:43:04.376567  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:04.377074  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:04.377095  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:04.377041  534127 retry.go:31] will retry after 313.980456ms: waiting for machine to come up
	I0923 12:43:04.692688  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:04.693152  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:04.693181  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:04.693099  534127 retry.go:31] will retry after 372.052091ms: waiting for machine to come up
	I0923 12:43:05.066905  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:05.067467  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:05.067493  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:05.067400  534127 retry.go:31] will retry after 517.898255ms: waiting for machine to come up
	I0923 12:43:05.587278  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:05.587797  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:05.587820  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:05.587744  534127 retry.go:31] will retry after 577.41604ms: waiting for machine to come up
	I0923 12:43:06.166681  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:06.167292  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:06.167323  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:06.167225  534127 retry.go:31] will retry after 585.584403ms: waiting for machine to come up
	I0923 12:43:06.754060  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:06.754483  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:06.754509  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:06.754445  534127 retry.go:31] will retry after 916.565306ms: waiting for machine to come up
	I0923 12:43:07.672599  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:07.673022  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:07.673048  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:07.672961  534127 retry.go:31] will retry after 1.163367164s: waiting for machine to come up
	I0923 12:43:08.837923  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:08.838450  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:08.838481  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:08.838395  534127 retry.go:31] will retry after 1.723378142s: waiting for machine to come up
	I0923 12:43:10.563892  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:10.564385  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:10.564419  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:10.564317  534127 retry.go:31] will retry after 1.435511952s: waiting for machine to come up
	I0923 12:43:12.002007  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:12.002402  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:12.002446  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:12.002335  534127 retry.go:31] will retry after 2.28980358s: waiting for machine to come up
	I0923 12:43:14.294786  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:14.295296  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:14.295318  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:14.295251  534127 retry.go:31] will retry after 3.244708075s: waiting for machine to come up
	I0923 12:43:17.543676  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:17.544065  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | unable to find current IP address of domain multinode-915704-m02 in network mk-multinode-915704
	I0923 12:43:17.544088  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | I0923 12:43:17.544031  534127 retry.go:31] will retry after 3.435624001s: waiting for machine to come up
	I0923 12:43:20.983033  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:20.983584  533789 main.go:141] libmachine: (multinode-915704-m02) Found IP for machine: 192.168.39.118
	I0923 12:43:20.983613  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has current primary IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:20.983622  533789 main.go:141] libmachine: (multinode-915704-m02) Reserving static IP address...
	I0923 12:43:20.984009  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "multinode-915704-m02", mac: "52:54:00:38:ce:58", ip: "192.168.39.118"} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:20.984043  533789 main.go:141] libmachine: (multinode-915704-m02) Reserved static IP address: 192.168.39.118
	I0923 12:43:20.984063  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | skip adding static IP to network mk-multinode-915704 - found existing host DHCP lease matching {name: "multinode-915704-m02", mac: "52:54:00:38:ce:58", ip: "192.168.39.118"}
	I0923 12:43:20.984079  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | Getting to WaitForSSH function...
	I0923 12:43:20.984095  533789 main.go:141] libmachine: (multinode-915704-m02) Waiting for SSH to be available...
	I0923 12:43:20.986371  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:20.986706  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:20.986745  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:20.986918  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | Using SSH client type: external
	I0923 12:43:20.986945  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa (-rw-------)
	I0923 12:43:20.986968  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.118 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0923 12:43:20.986976  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | About to run SSH command:
	I0923 12:43:20.986986  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | exit 0
	I0923 12:43:21.110726  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | SSH cmd err, output: <nil>: 
	I0923 12:43:21.111181  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetConfigRaw
	I0923 12:43:21.111857  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetIP
	I0923 12:43:21.114945  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.115357  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.115388  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.115651  533789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/multinode-915704/config.json ...
	I0923 12:43:21.115939  533789 machine.go:93] provisionDockerMachine start ...
	I0923 12:43:21.115967  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:21.116201  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.118603  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.119001  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.119042  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.119187  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.119347  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.119532  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.119620  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.119767  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:21.119948  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:21.119962  533789 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:43:21.223056  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0923 12:43:21.223100  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetMachineName
	I0923 12:43:21.223405  533789 buildroot.go:166] provisioning hostname "multinode-915704-m02"
	I0923 12:43:21.223435  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetMachineName
	I0923 12:43:21.223622  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.226312  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.226687  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.226716  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.226867  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.227062  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.227255  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.227425  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.227720  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:21.227904  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:21.227917  533789 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-915704-m02 && echo "multinode-915704-m02" | sudo tee /etc/hostname
	I0923 12:43:21.344379  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-915704-m02
	
	I0923 12:43:21.344414  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.347221  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.347590  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.347629  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.347793  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.348006  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.348220  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.348372  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.348628  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:21.348791  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:21.348808  533789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-915704-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-915704-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-915704-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:43:21.459411  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 12:43:21.459455  533789 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19690-497735/.minikube CaCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19690-497735/.minikube}
	I0923 12:43:21.459481  533789 buildroot.go:174] setting up certificates
	I0923 12:43:21.459506  533789 provision.go:84] configureAuth start
	I0923 12:43:21.459526  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetMachineName
	I0923 12:43:21.459874  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetIP
	I0923 12:43:21.462864  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.463406  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.463452  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.463587  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.466184  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.466582  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.466614  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.466740  533789 provision.go:143] copyHostCerts
	I0923 12:43:21.466797  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem
	I0923 12:43:21.466864  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem, removing ...
	I0923 12:43:21.466877  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem
	I0923 12:43:21.466955  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/ca.pem (1078 bytes)
	I0923 12:43:21.467057  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem
	I0923 12:43:21.467083  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem, removing ...
	I0923 12:43:21.467091  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem
	I0923 12:43:21.467132  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/cert.pem (1123 bytes)
	I0923 12:43:21.467193  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem
	I0923 12:43:21.467218  533789 exec_runner.go:144] found /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem, removing ...
	I0923 12:43:21.467227  533789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem
	I0923 12:43:21.467264  533789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19690-497735/.minikube/key.pem (1679 bytes)
	I0923 12:43:21.467330  533789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca-key.pem org=jenkins.multinode-915704-m02 san=[127.0.0.1 192.168.39.118 localhost minikube multinode-915704-m02]
	I0923 12:43:21.693555  533789 provision.go:177] copyRemoteCerts
	I0923 12:43:21.693646  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:43:21.693679  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.696546  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.696868  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.696895  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.697060  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.697311  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.697511  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.697665  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa Username:docker}
	I0923 12:43:21.777359  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0923 12:43:21.777471  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 12:43:21.802409  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0923 12:43:21.802483  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0923 12:43:21.826698  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0923 12:43:21.826801  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 12:43:21.851165  533789 provision.go:87] duration metric: took 391.640159ms to configureAuth
	I0923 12:43:21.851199  533789 buildroot.go:189] setting minikube options for container-runtime
	I0923 12:43:21.851471  533789 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:43:21.851513  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:21.851834  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.854632  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.855076  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.855102  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.855197  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.855415  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.855570  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.855730  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.855923  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:21.856117  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:21.856130  533789 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:43:21.960436  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0923 12:43:21.960468  533789 buildroot.go:70] root file system type: tmpfs
	I0923 12:43:21.960625  533789 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:43:21.960649  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:21.963696  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.964127  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:21.964157  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:21.964351  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:21.964568  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.964761  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:21.964917  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:21.965077  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:21.965283  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:21.965354  533789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.39.233"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:43:22.085440  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.39.233
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:43:22.085480  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:22.088244  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:22.088716  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:22.088747  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:22.089034  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:22.089357  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:22.089559  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:22.089753  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:22.089945  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:22.090112  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:22.090129  533789 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:43:23.910473  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
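Editor's note: the `diff ... || { mv ...; systemctl ...; }` command above only installs the new unit when it differs from the current one; here `diff` failed outright because no unit existed yet, so the new file was moved into place and docker was enabled and restarted. The empty `ExecStart=` line in the generated unit is what clears the inherited command so systemd sees exactly one effective `ExecStart`. A sketch of the swap-on-diff logic using scratch files (paths and unit contents are illustrative, and the privileged systemctl step is replaced by an echo):

```shell
# Replace a unit file only when the rendered version differs.
OLD=$(mktemp) NEW=$(mktemp)
printf '[Service]\nExecStart=/usr/bin/dockerd-old\n' > "$OLD"
# The drop-in pattern: an empty ExecStart= clears the inherited command,
# then the real command follows.
printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd --tlsverify\n' > "$NEW"

if ! diff -u "$OLD" "$NEW" > /dev/null; then
  mv "$NEW" "$OLD"   # install the new unit
  echo "unit replaced; would run: systemctl daemon-reload && systemctl restart docker"
fi
```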
	I0923 12:43:23.910507  533789 machine.go:96] duration metric: took 2.794550939s to provisionDockerMachine
	I0923 12:43:23.910521  533789 start.go:293] postStartSetup for "multinode-915704-m02" (driver="kvm2")
	I0923 12:43:23.910532  533789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:43:23.910547  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:23.910892  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:43:23.910929  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:23.913814  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:23.914266  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:23.914297  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:23.914475  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:23.914697  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:23.914916  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:23.915168  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa Username:docker}
	I0923 12:43:24.001712  533789 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:43:24.005810  533789 command_runner.go:130] > NAME=Buildroot
	I0923 12:43:24.005836  533789 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0923 12:43:24.005842  533789 command_runner.go:130] > ID=buildroot
	I0923 12:43:24.005849  533789 command_runner.go:130] > VERSION_ID=2023.02.9
	I0923 12:43:24.005856  533789 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0923 12:43:24.005921  533789 info.go:137] Remote host: Buildroot 2023.02.9
	I0923 12:43:24.005948  533789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-497735/.minikube/addons for local assets ...
	I0923 12:43:24.006026  533789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-497735/.minikube/files for local assets ...
	I0923 12:43:24.006114  533789 filesync.go:149] local asset: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem -> 5050122.pem in /etc/ssl/certs
	I0923 12:43:24.006127  533789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem -> /etc/ssl/certs/5050122.pem
	I0923 12:43:24.006237  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:43:24.022068  533789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/ssl/certs/5050122.pem --> /etc/ssl/certs/5050122.pem (1708 bytes)
	I0923 12:43:24.044398  533789 start.go:296] duration metric: took 133.860153ms for postStartSetup
	I0923 12:43:24.044446  533789 fix.go:56] duration metric: took 21.169173966s for fixHost
	I0923 12:43:24.044469  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:24.047631  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.048034  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:24.048063  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.048317  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:24.048593  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:24.048754  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:24.048925  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:24.049156  533789 main.go:141] libmachine: Using SSH client type: native
	I0923 12:43:24.049376  533789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0923 12:43:24.049393  533789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0923 12:43:24.151731  533789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1727095404.121993109
	
	I0923 12:43:24.151771  533789 fix.go:216] guest clock: 1727095404.121993109
	I0923 12:43:24.151786  533789 fix.go:229] Guest: 2024-09-23 12:43:24.121993109 +0000 UTC Remote: 2024-09-23 12:43:24.04445047 +0000 UTC m=+89.882899320 (delta=77.542639ms)
	I0923 12:43:24.151806  533789 fix.go:200] guest clock delta is within tolerance: 77.542639ms
	I0923 12:43:24.151813  533789 start.go:83] releasing machines lock for "multinode-915704-m02", held for 21.276556268s
	I0923 12:43:24.151838  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:24.152184  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetIP
	I0923 12:43:24.155205  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.155516  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:24.155541  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.157772  533789 out.go:177] * Found network options:
	I0923 12:43:24.159419  533789 out.go:177]   - NO_PROXY=192.168.39.233
	W0923 12:43:24.160720  533789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:43:24.160761  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:24.161440  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:24.161677  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:43:24.161792  533789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 12:43:24.161836  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	W0923 12:43:24.161858  533789 proxy.go:119] fail to check proxy env: Error ip not in block
	I0923 12:43:24.161952  533789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:43:24.161973  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:43:24.164777  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.164803  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.165154  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:24.165185  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.165213  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:43:13 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:43:24.165228  533789 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:43:24.165443  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:24.165609  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:24.165616  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:43:24.165805  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:43:24.165822  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:24.165962  533789 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:43:24.165957  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa Username:docker}
	I0923 12:43:24.166069  533789 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa Username:docker}
	I0923 12:43:24.284029  533789 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0923 12:43:24.284135  533789 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0923 12:43:24.284189  533789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 12:43:24.284259  533789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 12:43:24.300807  533789 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0923 12:43:24.300890  533789 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:43:24.300908  533789 start.go:495] detecting cgroup driver to use...
	I0923 12:43:24.301023  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:43:24.319247  533789 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 12:43:24.319534  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:43:24.329664  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:43:24.340185  533789 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:43:24.340265  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:43:24.350666  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:43:24.361156  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:43:24.371483  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:43:24.382115  533789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:43:24.393207  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:43:24.403747  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:43:24.414080  533789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
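Editor's note: the sequence of `sed` commands above rewrites `/etc/containerd/config.toml` in place to select the `cgroupfs` driver and the `runc.v2` runtime. The core edit, applied to a scratch copy of a minimal `config.toml` (file contents here are illustrative, the `sed` expression is the one from the log):

```shell
# Apply the SystemdCgroup rewrite from the log to a scratch config.toml.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution minikube runs: force SystemdCgroup = false,
# preserving the line's leading indentation via the capture group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
grep 'SystemdCgroup' "$CFG"
```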
	I0923 12:43:24.424683  533789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:43:24.433981  533789 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:43:24.434036  533789 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 12:43:24.434085  533789 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 12:43:24.443633  533789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
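Editor's note: the `sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables` error above is expected and tolerated; it simply means the `br_netfilter` module is not loaded yet, so minikube falls back to `modprobe br_netfilter` and then enables IP forwarding. A check-only sketch of that probe-and-fallback (needs no root; the echo stands in for the privileged modprobe):

```shell
# Probe for bridge netfilter the way the sysctl error implies:
# the key exists under /proc/sys only once br_netfilter is loaded.
if [ -e /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
  MSG="bridge netfilter already available"
else
  MSG="would run: sudo modprobe br_netfilter"
fi
echo "$MSG"
```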
	I0923 12:43:24.453496  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:43:24.585257  533789 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 12:43:24.609192  533789 start.go:495] detecting cgroup driver to use...
	I0923 12:43:24.609288  533789 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:43:24.634294  533789 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0923 12:43:24.634321  533789 command_runner.go:130] > [Unit]
	I0923 12:43:24.634331  533789 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 12:43:24.634339  533789 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 12:43:24.634348  533789 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0923 12:43:24.634355  533789 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0923 12:43:24.634362  533789 command_runner.go:130] > StartLimitBurst=3
	I0923 12:43:24.634368  533789 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 12:43:24.634374  533789 command_runner.go:130] > [Service]
	I0923 12:43:24.634382  533789 command_runner.go:130] > Type=notify
	I0923 12:43:24.634389  533789 command_runner.go:130] > Restart=on-failure
	I0923 12:43:24.634400  533789 command_runner.go:130] > Environment=NO_PROXY=192.168.39.233
	I0923 12:43:24.634414  533789 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 12:43:24.634430  533789 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 12:43:24.634444  533789 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 12:43:24.634456  533789 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 12:43:24.634471  533789 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 12:43:24.634482  533789 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 12:43:24.634496  533789 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 12:43:24.634511  533789 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 12:43:24.634525  533789 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 12:43:24.634533  533789 command_runner.go:130] > ExecStart=
	I0923 12:43:24.634556  533789 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	I0923 12:43:24.634567  533789 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 12:43:24.634580  533789 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 12:43:24.634594  533789 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 12:43:24.634604  533789 command_runner.go:130] > LimitNOFILE=infinity
	I0923 12:43:24.634613  533789 command_runner.go:130] > LimitNPROC=infinity
	I0923 12:43:24.634620  533789 command_runner.go:130] > LimitCORE=infinity
	I0923 12:43:24.634630  533789 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 12:43:24.634642  533789 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 12:43:24.634652  533789 command_runner.go:130] > TasksMax=infinity
	I0923 12:43:24.634660  533789 command_runner.go:130] > TimeoutStartSec=0
	I0923 12:43:24.634673  533789 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 12:43:24.634681  533789 command_runner.go:130] > Delegate=yes
	I0923 12:43:24.634692  533789 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 12:43:24.634705  533789 command_runner.go:130] > KillMode=process
	I0923 12:43:24.634712  533789 command_runner.go:130] > [Install]
	I0923 12:43:24.634723  533789 command_runner.go:130] > WantedBy=multi-user.target
	I0923 12:43:24.634814  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:43:24.656547  533789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 12:43:24.678607  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 12:43:24.693749  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:43:24.707231  533789 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0923 12:43:24.733572  533789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:43:24.747783  533789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:43:24.765745  533789 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
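The `printf ... | sudo tee` invocation above writes `/etc/crictl.yaml` so that `crictl` talks to cri-dockerd. The same pattern, sketched against a temp directory instead of `/etc`:

```shell
# Sketch of the crictl.yaml write from the log, targeting a temp dir so no sudo is needed.
dir=$(mktemp -d)
printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | tee "$dir/crictl.yaml" >/dev/null
cat "$dir/crictl.yaml"    # -> runtime-endpoint: unix:///var/run/cri-dockerd.sock
rm -rf "$dir"
```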
	I0923 12:43:24.765832  533789 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:43:24.769503  533789 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 12:43:24.769646  533789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:43:24.778722  533789 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:43:24.795361  533789 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:43:24.914572  533789 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:43:25.036889  533789 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:43:25.036955  533789 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
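The log shows a 130-byte `/etc/docker/daemon.json` being written to pin Docker to the `cgroupfs` driver, but not the payload itself. A representative daemon.json for that purpose (illustrative only; minikube's exact content is not shown here):

```shell
# Hypothetical daemon.json selecting the cgroupfs cgroup driver; the log does not
# show minikube's actual 130-byte payload, so treat these keys as illustrative.
dir=$(mktemp -d)
cat > "$dir/daemon.json" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
grep -o 'native.cgroupdriver=cgroupfs' "$dir/daemon.json"
rm -rf "$dir"
```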
	I0923 12:43:25.055098  533789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:43:25.170644  533789 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:44:26.234721  533789 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0923 12:44:26.234776  533789 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0923 12:44:26.234801  533789 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.064123192s)
	I0923 12:44:26.234894  533789 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0923 12:44:26.250347  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 systemd[1]: Starting Docker Application Container Engine...
	I0923 12:44:26.250376  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.466044549Z" level=info msg="Starting up"
	I0923 12:44:26.250388  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.467558463Z" level=info msg="containerd not running, starting managed containerd"
	I0923 12:44:26.250402  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.468352110Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=500
	I0923 12:44:26.250416  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.495664251Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I0923 12:44:26.250438  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.515767190Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0923 12:44:26.250461  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.515914325Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0923 12:44:26.250472  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516007875Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0923 12:44:26.250483  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516050723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250499  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516384302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0923 12:44:26.250510  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516483534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250541  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516683546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0923 12:44:26.250564  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516800268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250578  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516843411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0923 12:44:26.250589  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516884445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250600  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.517142642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250615  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.517424377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250641  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.519741332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0923 12:44:26.250654  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.519863033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0923 12:44:26.250679  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520058313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0923 12:44:26.250698  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520109934Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0923 12:44:26.250716  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520416385Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0923 12:44:26.250731  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520546340Z" level=info msg="metadata content store policy set" policy=shared
	I0923 12:44:26.250746  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.523911761Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0923 12:44:26.250776  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.523997010Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0923 12:44:26.250792  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524014748Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0923 12:44:26.250808  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524032855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0923 12:44:26.250822  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524050629Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0923 12:44:26.250837  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524179075Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0923 12:44:26.250851  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524510950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0923 12:44:26.250867  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524615290Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0923 12:44:26.250883  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524647631Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0923 12:44:26.250918  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524662622Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0923 12:44:26.250940  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524674957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.250957  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524686603Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.250978  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524733937Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.250998  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524749023Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.251017  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524762887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.251034  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524777825Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.251059  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524798426Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.251095  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524814763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0923 12:44:26.251106  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524842641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251119  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524855948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251131  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524866824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251143  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524877864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251155  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524888510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251167  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524899401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251178  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524909731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251190  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524927140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251202  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524939393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251218  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524952590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251231  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524962502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251243  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524973115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251255  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524983575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251267  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524996839Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0923 12:44:26.251279  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525020872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251291  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525031620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251303  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525043318Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0923 12:44:26.251320  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525116754Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0923 12:44:26.251336  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525139796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0923 12:44:26.251349  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525150902Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0923 12:44:26.251365  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525166046Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0923 12:44:26.251379  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525175859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0923 12:44:26.251391  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525186773Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0923 12:44:26.251403  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525203359Z" level=info msg="NRI interface is disabled by configuration."
	I0923 12:44:26.251414  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526104835Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0923 12:44:26.251424  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526242000Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0923 12:44:26.251433  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526369097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0923 12:44:26.251441  533789 command_runner.go:130] > Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526899015Z" level=info msg="containerd successfully booted in 0.032473s"
	I0923 12:44:26.251450  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.500430476Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0923 12:44:26.251460  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.525855967Z" level=info msg="Loading containers: start."
	I0923 12:44:26.251481  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.672424233Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0923 12:44:26.251495  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.769348274Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0923 12:44:26.251506  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.829820116Z" level=info msg="Loading containers: done."
	I0923 12:44:26.251521  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.843805067Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0923 12:44:26.251533  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.843946913Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0923 12:44:26.251547  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.844043912Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	I0923 12:44:26.251558  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.844468504Z" level=info msg="Daemon has completed initialization"
	I0923 12:44:26.251572  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.878906159Z" level=info msg="API listen on /var/run/docker.sock"
	I0923 12:44:26.251582  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.879022375Z" level=info msg="API listen on [::]:2376"
	I0923 12:44:26.251592  533789 command_runner.go:130] > Sep 23 12:43:23 multinode-915704-m02 systemd[1]: Started Docker Application Container Engine.
	I0923 12:44:26.251601  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0923 12:44:26.251612  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.155657450Z" level=info msg="Processing signal 'terminated'"
	I0923 12:44:26.251625  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157487813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0923 12:44:26.251641  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157735426Z" level=info msg="Daemon shutdown complete"
	I0923 12:44:26.251672  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157814344Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0923 12:44:26.251686  533789 command_runner.go:130] > Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157847761Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0923 12:44:26.251695  533789 command_runner.go:130] > Sep 23 12:43:26 multinode-915704-m02 systemd[1]: docker.service: Deactivated successfully.
	I0923 12:44:26.251707  533789 command_runner.go:130] > Sep 23 12:43:26 multinode-915704-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0923 12:44:26.251721  533789 command_runner.go:130] > Sep 23 12:43:26 multinode-915704-m02 systemd[1]: Starting Docker Application Container Engine...
	I0923 12:44:26.251736  533789 command_runner.go:130] > Sep 23 12:43:26 multinode-915704-m02 dockerd[833]: time="2024-09-23T12:43:26.191330905Z" level=info msg="Starting up"
	I0923 12:44:26.251755  533789 command_runner.go:130] > Sep 23 12:44:26 multinode-915704-m02 dockerd[833]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0923 12:44:26.251769  533789 command_runner.go:130] > Sep 23 12:44:26 multinode-915704-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0923 12:44:26.251783  533789 command_runner.go:130] > Sep 23 12:44:26 multinode-915704-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0923 12:44:26.251797  533789 command_runner.go:130] > Sep 23 12:44:26 multinode-915704-m02 systemd[1]: Failed to start Docker Application Container Engine.
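The fatal line in the journalctl replay is dockerd's dial timeout on `/run/containerd/containerd.sock`: the earlier `systemctl stop -f containerd` removed the system containerd, and the restarted dockerd never got a live socket to dial. When triaging this on the node, it helps to distinguish "path absent" from "path present but not a listening socket". A sketch using a stand-in path (checking the real socket requires the host):

```shell
# Distinguish a missing containerd socket from a non-socket file at that path.
# Stand-in path for illustration; on the node you would check /run/containerd/containerd.sock.
sock=$(mktemp)            # a regular file, deliberately NOT a unix socket
if [ -S "$sock" ]; then
  echo "socket present"
elif [ -e "$sock" ]; then
  echo "path exists but is not a socket: dial would fail"
else
  echo "socket missing"
fi
rm -f "$sock"
```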
	I0923 12:44:26.258302  533789 out.go:201] 
	W0923 12:44:26.259744  533789 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Sep 23 12:43:22 multinode-915704-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.466044549Z" level=info msg="Starting up"
	Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.467558463Z" level=info msg="containerd not running, starting managed containerd"
	Sep 23 12:43:22 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:22.468352110Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=500
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.495664251Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.515767190Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.515914325Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516007875Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516050723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516384302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516483534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516683546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516800268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516843411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.516884445Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.517142642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.517424377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.519741332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.519863033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520058313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520109934Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520416385Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.520546340Z" level=info msg="metadata content store policy set" policy=shared
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.523911761Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.523997010Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524014748Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524032855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524050629Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524179075Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524510950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524615290Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524647631Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524662622Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524674957Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524686603Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524733937Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524749023Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524762887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524777825Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524798426Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524814763Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524842641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524855948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524866824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524877864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524888510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524899401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524909731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524927140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524939393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524952590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524962502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524973115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524983575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.524996839Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525020872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525031620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525043318Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525116754Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525139796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525150902Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525166046Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525175859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525186773Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.525203359Z" level=info msg="NRI interface is disabled by configuration."
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526104835Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526242000Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526369097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Sep 23 12:43:22 multinode-915704-m02 dockerd[500]: time="2024-09-23T12:43:22.526899015Z" level=info msg="containerd successfully booted in 0.032473s"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.500430476Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.525855967Z" level=info msg="Loading containers: start."
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.672424233Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.769348274Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.829820116Z" level=info msg="Loading containers: done."
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.843805067Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.843946913Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.844043912Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.0
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.844468504Z" level=info msg="Daemon has completed initialization"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.878906159Z" level=info msg="API listen on /var/run/docker.sock"
	Sep 23 12:43:23 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:23.879022375Z" level=info msg="API listen on [::]:2376"
	Sep 23 12:43:23 multinode-915704-m02 systemd[1]: Started Docker Application Container Engine.
	Sep 23 12:43:25 multinode-915704-m02 systemd[1]: Stopping Docker Application Container Engine...
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.155657450Z" level=info msg="Processing signal 'terminated'"
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157487813Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157735426Z" level=info msg="Daemon shutdown complete"
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157814344Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Sep 23 12:43:25 multinode-915704-m02 dockerd[493]: time="2024-09-23T12:43:25.157847761Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Sep 23 12:43:26 multinode-915704-m02 systemd[1]: docker.service: Deactivated successfully.
	Sep 23 12:43:26 multinode-915704-m02 systemd[1]: Stopped Docker Application Container Engine.
	Sep 23 12:43:26 multinode-915704-m02 systemd[1]: Starting Docker Application Container Engine...
	Sep 23 12:43:26 multinode-915704-m02 dockerd[833]: time="2024-09-23T12:43:26.191330905Z" level=info msg="Starting up"
	Sep 23 12:44:26 multinode-915704-m02 dockerd[833]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Sep 23 12:44:26 multinode-915704-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Sep 23 12:44:26 multinode-915704-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Sep 23 12:44:26 multinode-915704-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0923 12:44:26.259792  533789 out.go:270] * 
	W0923 12:44:26.260711  533789 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 12:44:26.262242  533789 out.go:201] 
	
	
	==> Docker <==
	Sep 23 12:42:58 multinode-915704 dockerd[866]: time="2024-09-23T12:42:58.406943596Z" level=info msg="shim disconnected" id=a7d098fca98a3f12ea4e1585d19853c2f768acd699ee55c0ba8c84df9cce3156 namespace=moby
	Sep 23 12:42:58 multinode-915704 dockerd[866]: time="2024-09-23T12:42:58.407536404Z" level=warning msg="cleaning up after shim disconnected" id=a7d098fca98a3f12ea4e1585d19853c2f768acd699ee55c0ba8c84df9cce3156 namespace=moby
	Sep 23 12:42:58 multinode-915704 dockerd[866]: time="2024-09-23T12:42:58.407591470Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.372990935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.373062413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.373166311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.374577427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:42:59 multinode-915704 cri-dockerd[1147]: time="2024-09-23T12:42:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ab739920f69044a396f6363e8a86fb4c7978eca8a81e5539320b4c494d2f714f/resolv.conf as [nameserver 192.168.122.1]"
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.623660991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.623763913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.624202951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.624385850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.738266378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.738747640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.738885454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.739277167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:42:59 multinode-915704 cri-dockerd[1147]: time="2024-09-23T12:42:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d2baead2c72c575fc9eeedb7ce20245abfda38ad0b373c50aa5792249f042b99/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.993925764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.994004280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.994215717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:42:59 multinode-915704 dockerd[866]: time="2024-09-23T12:42:59.994445576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:43:11 multinode-915704 dockerd[866]: time="2024-09-23T12:43:11.656962722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 23 12:43:11 multinode-915704 dockerd[866]: time="2024-09-23T12:43:11.659214256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 23 12:43:11 multinode-915704 dockerd[866]: time="2024-09-23T12:43:11.659401337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 23 12:43:11 multinode-915704 dockerd[866]: time="2024-09-23T12:43:11.659684933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	61b2a0dcad400       6e38f40d628db       About a minute ago   Running             storage-provisioner       4                   7df276635e782       storage-provisioner
	11afe9d9cd0dc       8c811b4aec35f       About a minute ago   Running             busybox                   2                   d2baead2c72c5       busybox-7dff88458-h6n4v
	94136436c3771       c69fa2e9cbf5f       About a minute ago   Running             coredns                   2                   ab739920f6904       coredns-7c65d6cfc9-s5jv2
	d6860118debf8       12968670680f4       About a minute ago   Running             kindnet-cni               2                   699c0ea1e9118       kindnet-kt7cw
	a7d098fca98a3       6e38f40d628db       About a minute ago   Exited              storage-provisioner       3                   7df276635e782       storage-provisioner
	c9814a8766115       60c005f310ff3       2 minutes ago        Running             kube-proxy                2                   9f0f60850c8bd       kube-proxy-rmgjt
	9fb5e3f2dbb61       9aa1fad941575       2 minutes ago        Running             kube-scheduler            2                   9398e14aee503       kube-scheduler-multinode-915704
	526e8b87ed5a1       6bab7719df100       2 minutes ago        Running             kube-apiserver            2                   3242f294748ec       kube-apiserver-multinode-915704
	438be3e5ef367       2e96e5913fc06       2 minutes ago        Running             etcd                      2                   02efff1241ddb       etcd-multinode-915704
	a962c406d7c9c       175ffd71cce3d       2 minutes ago        Running             kube-controller-manager   2                   39b1788dd8a4a       kube-controller-manager-multinode-915704
	12866ca45ac6e       8c811b4aec35f       4 minutes ago        Exited              busybox                   1                   9a76938435788       busybox-7dff88458-h6n4v
	d8c6fd4c36456       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   1                   7fd3389600c28       coredns-7c65d6cfc9-s5jv2
	f514f107aa3b5       12968670680f4       4 minutes ago        Exited              kindnet-cni               1                   496b1236003c9       kindnet-kt7cw
	e9ab80b3cbfcf       60c005f310ff3       4 minutes ago        Exited              kube-proxy                1                   de517e94d278f       kube-proxy-rmgjt
	80c39f229adc4       9aa1fad941575       4 minutes ago        Exited              kube-scheduler            1                   71016b8c92e50       kube-scheduler-multinode-915704
	1b119ee22f961       2e96e5913fc06       4 minutes ago        Exited              etcd                      1                   40e23befdd459       etcd-multinode-915704
	2b978cfcf3ae4       6bab7719df100       4 minutes ago        Exited              kube-apiserver            1                   3076f80c7c38f       kube-apiserver-multinode-915704
	3bb7d4eec4095       175ffd71cce3d       4 minutes ago        Exited              kube-controller-manager   1                   68460215bbe10       kube-controller-manager-multinode-915704
	
	
	==> coredns [94136436c377] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57807 - 20855 "HINFO IN 5315841011483553666.8107645605776590544. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012325193s
	
	
	==> coredns [d8c6fd4c3645] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:58488 - 52380 "HINFO IN 1261490610189019025.5747600969276404062. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015977116s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-915704
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-915704
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-915704
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T12_35_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:35:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-915704
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:44:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:42:47 +0000   Mon, 23 Sep 2024 12:35:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:42:47 +0000   Mon, 23 Sep 2024 12:35:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:42:47 +0000   Mon, 23 Sep 2024 12:35:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:42:47 +0000   Mon, 23 Sep 2024 12:42:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.233
	  Hostname:    multinode-915704
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc4ecae632974cd9bd94a18fbe0f75af
	  System UUID:                bc4ecae6-3297-4cd9-bd94-a18fbe0f75af
	  Boot ID:                    e3f99051-ca9d-4c64-a88a-7dfbb4833228
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-h6n4v                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 coredns-7c65d6cfc9-s5jv2                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-multinode-915704                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m13s
	  kube-system                 kindnet-kt7cw                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m8s
	  kube-system                 kube-apiserver-multinode-915704             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-controller-manager-multinode-915704    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-rmgjt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-multinode-915704             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  Starting                 118s                   kube-proxy       
	  Normal  Starting                 4m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m13s                  kubelet          Node multinode-915704 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m13s                  kubelet          Node multinode-915704 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s                  kubelet          Node multinode-915704 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m13s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m9s                   node-controller  Node multinode-915704 event: Registered Node multinode-915704 in Controller
	  Normal  NodeReady                8m50s                  kubelet          Node multinode-915704 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node multinode-915704 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node multinode-915704 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node multinode-915704 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m51s                  node-controller  Node multinode-915704 event: Registered Node multinode-915704 in Controller
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)    kubelet          Node multinode-915704 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)    kubelet          Node multinode-915704 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)    kubelet          Node multinode-915704 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           117s                   node-controller  Node multinode-915704 event: Registered Node multinode-915704 in Controller
	
	
	Name:               multinode-915704-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-915704-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=multinode-915704
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_23T12_40_23_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 12:40:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-915704-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:41:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 23 Sep 2024 12:40:37 +0000   Mon, 23 Sep 2024 12:43:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 23 Sep 2024 12:40:37 +0000   Mon, 23 Sep 2024 12:43:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 23 Sep 2024 12:40:37 +0000   Mon, 23 Sep 2024 12:43:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 23 Sep 2024 12:40:37 +0000   Mon, 23 Sep 2024 12:43:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    multinode-915704-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fff78a8288904fb5ba314a854d3d4d6e
	  System UUID:                fff78a82-8890-4fb5-ba31-4a854d3d4d6e
	  Boot ID:                    682eb53e-7f10-477a-9eac-bd55ecf2e949
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5pg4r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kindnet-cddh6              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m17s
	  kube-system                 kube-proxy-hgdzz           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  Starting                 8m10s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  8m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m17s (x2 over 8m18s)  kubelet          Node multinode-915704-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m17s (x2 over 8m18s)  kubelet          Node multinode-915704-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m17s (x2 over 8m18s)  kubelet          Node multinode-915704-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                7m55s                  kubelet          Node multinode-915704-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  4m4s (x2 over 4m4s)    kubelet          Node multinode-915704-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x2 over 4m4s)    kubelet          Node multinode-915704-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x2 over 4m4s)    kubelet          Node multinode-915704-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m50s                  kubelet          Node multinode-915704-m02 status is now: NodeReady
	  Normal  RegisteredNode           117s                   node-controller  Node multinode-915704-m02 event: Registered Node multinode-915704-m02 in Controller
	  Normal  NodeNotReady             77s                    node-controller  Node multinode-915704-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.036986] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep23 12:42] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.937845] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.548570] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.300996] systemd-fstab-generator[477]: Ignoring "noauto" option for root device
	[  +0.061426] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058957] systemd-fstab-generator[489]: Ignoring "noauto" option for root device
	[  +2.173028] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +0.295116] systemd-fstab-generator[825]: Ignoring "noauto" option for root device
	[  +0.117331] systemd-fstab-generator[837]: Ignoring "noauto" option for root device
	[  +0.128227] systemd-fstab-generator[851]: Ignoring "noauto" option for root device
	[  +2.252702] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.303656] systemd-fstab-generator[1100]: Ignoring "noauto" option for root device
	[  +0.113721] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +0.127828] systemd-fstab-generator[1124]: Ignoring "noauto" option for root device
	[  +0.151648] systemd-fstab-generator[1139]: Ignoring "noauto" option for root device
	[  +0.465417] systemd-fstab-generator[1268]: Ignoring "noauto" option for root device
	[  +1.707076] systemd-fstab-generator[1399]: Ignoring "noauto" option for root device
	[  +2.152146] kauditd_printk_skb: 229 callbacks suppressed
	[  +5.035711] kauditd_printk_skb: 60 callbacks suppressed
	[  +1.763982] systemd-fstab-generator[2227]: Ignoring "noauto" option for root device
	[  +4.076054] kauditd_printk_skb: 23 callbacks suppressed
	[Sep23 12:43] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [1b119ee22f96] <==
	{"level":"info","ts":"2024-09-23T12:39:31.705036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T12:39:31.705191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 received MsgPreVoteResp from 678e262213f11973 at term 2"}
	{"level":"info","ts":"2024-09-23T12:39:31.705407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T12:39:31.705542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 received MsgVoteResp from 678e262213f11973 at term 3"}
	{"level":"info","ts":"2024-09-23T12:39:31.705628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became leader at term 3"}
	{"level":"info","ts":"2024-09-23T12:39:31.705700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 678e262213f11973 elected leader 678e262213f11973 at term 3"}
	{"level":"info","ts":"2024-09-23T12:39:31.710437Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"678e262213f11973","local-member-attributes":"{Name:multinode-915704 ClientURLs:[https://192.168.39.233:2379]}","request-path":"/0/members/678e262213f11973/attributes","cluster-id":"30d9b598be045872","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T12:39:31.710648Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T12:39:31.710960Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T12:39:31.711944Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T12:39:31.711977Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T12:39:31.712950Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T12:39:31.713137Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T12:39:31.714056Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.233:2379"}
	{"level":"info","ts":"2024-09-23T12:39:31.714247Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T12:41:41.887576Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T12:41:41.887722Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-915704","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"]}
	{"level":"warn","ts":"2024-09-23T12:41:41.887876Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T12:41:41.887996Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T12:41:41.931823Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.233:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T12:41:41.931889Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.233:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T12:41:41.931973Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"678e262213f11973","current-leader-member-id":"678e262213f11973"}
	{"level":"info","ts":"2024-09-23T12:41:41.936771Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-09-23T12:41:41.936923Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-09-23T12:41:41.936946Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-915704","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"]}
	
	
	==> etcd [438be3e5ef36] <==
	{"level":"info","ts":"2024-09-23T12:42:24.664904Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"30d9b598be045872","local-member-id":"678e262213f11973","added-peer-id":"678e262213f11973","added-peer-peer-urls":["https://192.168.39.233:2380"]}
	{"level":"info","ts":"2024-09-23T12:42:24.665214Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"30d9b598be045872","local-member-id":"678e262213f11973","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T12:42:24.665367Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T12:42:24.661221Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T12:42:24.671889Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-23T12:42:24.675027Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-09-23T12:42:24.675211Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.233:2380"}
	{"level":"info","ts":"2024-09-23T12:42:24.676023Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"678e262213f11973","initial-advertise-peer-urls":["https://192.168.39.233:2380"],"listen-peer-urls":["https://192.168.39.233:2380"],"advertise-client-urls":["https://192.168.39.233:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.233:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-23T12:42:24.676520Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T12:42:25.739179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-23T12:42:25.739309Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-23T12:42:25.739360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 received MsgPreVoteResp from 678e262213f11973 at term 3"}
	{"level":"info","ts":"2024-09-23T12:42:25.739390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became candidate at term 4"}
	{"level":"info","ts":"2024-09-23T12:42:25.739425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 received MsgVoteResp from 678e262213f11973 at term 4"}
	{"level":"info","ts":"2024-09-23T12:42:25.739447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"678e262213f11973 became leader at term 4"}
	{"level":"info","ts":"2024-09-23T12:42:25.739471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 678e262213f11973 elected leader 678e262213f11973 at term 4"}
	{"level":"info","ts":"2024-09-23T12:42:25.746262Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"678e262213f11973","local-member-attributes":"{Name:multinode-915704 ClientURLs:[https://192.168.39.233:2379]}","request-path":"/0/members/678e262213f11973/attributes","cluster-id":"30d9b598be045872","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T12:42:25.746536Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T12:42:25.746803Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T12:42:25.747299Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T12:42:25.750111Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T12:42:25.751136Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T12:42:25.754826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.233:2379"}
	{"level":"info","ts":"2024-09-23T12:42:25.758325Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T12:42:25.765009Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:44:27 up 2 min,  0 users,  load average: 0.16, 0.16, 0.07
	Linux multinode-915704 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d6860118debf] <==
	I0923 12:43:19.625975       1 main.go:299] handling current node
	I0923 12:43:29.631146       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0923 12:43:29.631178       1 main.go:299] handling current node
	I0923 12:43:29.631191       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0923 12:43:29.631196       1 main.go:322] Node multinode-915704-m02 has CIDR [10.244.1.0/24] 
	I0923 12:43:39.633360       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0923 12:43:39.633451       1 main.go:299] handling current node
	I0923 12:43:39.633489       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0923 12:43:39.633495       1 main.go:322] Node multinode-915704-m02 has CIDR [10.244.1.0/24] 
	I0923 12:43:49.626820       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0923 12:43:49.626877       1 main.go:299] handling current node
	I0923 12:43:49.626895       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0923 12:43:49.626901       1 main.go:322] Node multinode-915704-m02 has CIDR [10.244.1.0/24] 
	I0923 12:43:59.626735       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0923 12:43:59.626854       1 main.go:299] handling current node
	I0923 12:43:59.626893       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0923 12:43:59.626941       1 main.go:322] Node multinode-915704-m02 has CIDR [10.244.1.0/24] 
	I0923 12:44:09.633343       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0923 12:44:09.633445       1 main.go:299] handling current node
	I0923 12:44:09.633481       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0923 12:44:09.633532       1 main.go:322] Node multinode-915704-m02 has CIDR [10.244.1.0/24] 
	I0923 12:44:19.625228       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0923 12:44:19.625278       1 main.go:299] handling current node
	I0923 12:44:19.625299       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0923 12:44:19.625328       1 main.go:322] Node multinode-915704-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [f514f107aa3b] <==
	I0923 12:40:55.671708       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0923 12:40:55.671809       1 main.go:322] Node multinode-915704-m03 has CIDR [10.244.3.0/24] 
	I0923 12:41:05.668315       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0923 12:41:05.668591       1 main.go:322] Node multinode-915704-m03 has CIDR [10.244.3.0/24] 
	I0923 12:41:05.668912       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0923 12:41:05.669199       1 main.go:299] handling current node
	I0923 12:41:05.669412       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0923 12:41:05.669579       1 main.go:322] Node multinode-915704-m02 has CIDR [10.244.1.0/24] 
	I0923 12:41:15.669787       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0923 12:41:15.669859       1 main.go:299] handling current node
	I0923 12:41:15.669881       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0923 12:41:15.669887       1 main.go:322] Node multinode-915704-m02 has CIDR [10.244.1.0/24] 
	I0923 12:41:15.670325       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0923 12:41:15.670356       1 main.go:322] Node multinode-915704-m03 has CIDR [10.244.2.0/24] 
	I0923 12:41:15.670696       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.77 Flags: [] Table: 0} 
	I0923 12:41:25.667750       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0923 12:41:25.667999       1 main.go:299] handling current node
	I0923 12:41:25.668148       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0923 12:41:25.668271       1 main.go:322] Node multinode-915704-m02 has CIDR [10.244.1.0/24] 
	I0923 12:41:25.668530       1 main.go:295] Handling node with IPs: map[192.168.39.77:{}]
	I0923 12:41:25.668631       1 main.go:322] Node multinode-915704-m03 has CIDR [10.244.2.0/24] 
	I0923 12:41:35.668331       1 main.go:295] Handling node with IPs: map[192.168.39.233:{}]
	I0923 12:41:35.668361       1 main.go:299] handling current node
	I0923 12:41:35.668375       1 main.go:295] Handling node with IPs: map[192.168.39.118:{}]
	I0923 12:41:35.668379       1 main.go:322] Node multinode-915704-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [2b978cfcf3ae] <==
	W0923 12:41:51.193337       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.262076       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.277145       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.290770       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.366995       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.383382       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.513220       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.515094       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.591974       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.595602       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.654625       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.654625       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.670682       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.696740       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.711847       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.714336       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.727069       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.750750       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.765392       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.803329       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.824858       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.840679       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.840684       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.919606       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 12:41:51.927123       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [526e8b87ed5a] <==
	I0923 12:42:27.356112       1 policy_source.go:224] refreshing policies
	I0923 12:42:27.369493       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 12:42:27.369591       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 12:42:27.369717       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 12:42:27.369842       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 12:42:27.370372       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 12:42:27.371524       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 12:42:27.372611       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 12:42:27.373352       1 aggregator.go:171] initial CRD sync complete...
	I0923 12:42:27.373612       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 12:42:27.373665       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 12:42:27.373685       1 cache.go:39] Caches are synced for autoregister controller
	I0923 12:42:27.394136       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 12:42:27.430570       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 12:42:27.447854       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 12:42:27.471777       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0923 12:42:27.478304       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0923 12:42:28.234646       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 12:42:29.108734       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0923 12:42:29.346181       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0923 12:42:29.370800       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0923 12:42:29.458735       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0923 12:42:29.471920       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0923 12:42:30.830632       1 controller.go:615] quota admission added evaluator for: endpoints
	I0923 12:42:30.865006       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3bb7d4eec409] <==
	I0923 12:40:50.622222       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.666µs"
	I0923 12:40:51.635024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.075177ms"
	I0923 12:40:51.635107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.317µs"
	I0923 12:41:05.447317       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:05.468668       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:05.798373       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-915704-m02"
	I0923 12:41:05.799438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:06.878090       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-915704-m03\" does not exist"
	I0923 12:41:06.878210       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-915704-m02"
	I0923 12:41:06.893096       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-915704-m03" podCIDRs=["10.244.2.0/24"]
	I0923 12:41:06.893179       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:06.895262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:06.903898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:07.328948       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:07.692349       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:11.753780       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:17.146190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:24.511772       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-915704-m03"
	I0923 12:41:24.512628       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:24.528104       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:26.700976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:27.265094       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:27.280023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:27.802907       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m03"
	I0923 12:41:27.803250       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-915704-m02"
	
	
	==> kube-controller-manager [a962c406d7c9] <==
	I0923 12:42:30.937026       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="97.544743ms"
	I0923 12:42:30.939354       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704"
	I0923 12:42:30.941954       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.797µs"
	I0923 12:42:31.084949       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704"
	I0923 12:42:31.244356       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 12:42:31.244389       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0923 12:42:31.279055       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 12:42:47.626993       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-915704-m02"
	I0923 12:42:47.627441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704"
	I0923 12:42:47.640660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704"
	I0923 12:42:50.962971       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704"
	I0923 12:43:00.784748       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.407256ms"
	I0923 12:43:00.785244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="174.442µs"
	I0923 12:43:00.800941       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="59.202µs"
	I0923 12:43:00.834354       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="12.917606ms"
	I0923 12:43:00.834538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.569µs"
	I0923 12:43:10.851781       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-jthg2"
	I0923 12:43:10.882818       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-jthg2"
	I0923 12:43:10.883011       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lb8gc"
	I0923 12:43:10.905143       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lb8gc"
	I0923 12:43:10.976005       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m02"
	I0923 12:43:10.994978       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m02"
	I0923 12:43:11.025686       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.354128ms"
	I0923 12:43:11.027044       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="1.320323ms"
	I0923 12:43:16.123936       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-915704-m02"
	
	
	==> kube-proxy [c9814a876611] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 12:42:28.601056       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 12:42:28.628599       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.233"]
	E0923 12:42:28.628919       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:42:28.686562       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 12:42:28.686611       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 12:42:28.686635       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:42:28.689921       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:42:28.691860       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:42:28.692300       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:42:28.697596       1 config.go:199] "Starting service config controller"
	I0923 12:42:28.698186       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:42:28.699262       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:42:28.699290       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:42:28.700480       1 config.go:328] "Starting node config controller"
	I0923 12:42:28.700513       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:42:28.800050       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 12:42:28.800160       1 shared_informer.go:320] Caches are synced for service config
	I0923 12:42:28.801582       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e9ab80b3cbfc] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0923 12:39:34.608601       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0923 12:39:34.637846       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.233"]
	E0923 12:39:34.638576       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 12:39:34.683609       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0923 12:39:34.683868       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0923 12:39:34.683991       1 server_linux.go:169] "Using iptables Proxier"
	I0923 12:39:34.688205       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 12:39:34.688909       1 server.go:483] "Version info" version="v1.31.1"
	I0923 12:39:34.689080       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:39:34.691541       1 config.go:105] "Starting endpoint slice config controller"
	I0923 12:39:34.691915       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 12:39:34.691978       1 config.go:199] "Starting service config controller"
	I0923 12:39:34.691996       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 12:39:34.694142       1 config.go:328] "Starting node config controller"
	I0923 12:39:34.694289       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 12:39:34.793013       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 12:39:34.793098       1 shared_informer.go:320] Caches are synced for service config
	I0923 12:39:34.794530       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [80c39f229adc] <==
	I0923 12:39:30.758184       1 serving.go:386] Generated self-signed cert in-memory
	W0923 12:39:32.954388       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 12:39:32.954438       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 12:39:32.954448       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 12:39:32.954555       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 12:39:33.025112       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 12:39:33.025162       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:39:33.038711       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 12:39:33.039077       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 12:39:33.039306       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 12:39:33.039524       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 12:39:33.140059       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 12:41:41.871397       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0923 12:41:41.871892       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0923 12:41:41.872240       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [9fb5e3f2dbb6] <==
	I0923 12:42:25.421618       1 serving.go:386] Generated self-signed cert in-memory
	W0923 12:42:27.294064       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 12:42:27.294154       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 12:42:27.294164       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 12:42:27.294500       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 12:42:27.370810       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 12:42:27.370944       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 12:42:27.374222       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 12:42:27.374546       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 12:42:27.375147       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 12:42:27.376285       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 12:42:27.475020       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 12:42:38 multinode-915704 kubelet[1406]: E0923 12:42:38.548737    1406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-h6n4v" podUID="78bb2d3f-74bb-4ebe-a4ed-4a868066da48"
	Sep 23 12:42:38 multinode-915704 kubelet[1406]: E0923 12:42:38.549514    1406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-s5jv2" podUID="0dc645c9-049b-41b4-abb9-efb0c3496da5"
	Sep 23 12:42:40 multinode-915704 kubelet[1406]: E0923 12:42:40.549010    1406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-s5jv2" podUID="0dc645c9-049b-41b4-abb9-efb0c3496da5"
	Sep 23 12:42:40 multinode-915704 kubelet[1406]: E0923 12:42:40.549558    1406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-h6n4v" podUID="78bb2d3f-74bb-4ebe-a4ed-4a868066da48"
	Sep 23 12:42:42 multinode-915704 kubelet[1406]: E0923 12:42:42.550393    1406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-s5jv2" podUID="0dc645c9-049b-41b4-abb9-efb0c3496da5"
	Sep 23 12:42:42 multinode-915704 kubelet[1406]: E0923 12:42:42.550896    1406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-h6n4v" podUID="78bb2d3f-74bb-4ebe-a4ed-4a868066da48"
	Sep 23 12:42:43 multinode-915704 kubelet[1406]: E0923 12:42:43.161970    1406 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 23 12:42:43 multinode-915704 kubelet[1406]: E0923 12:42:43.162247    1406 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0dc645c9-049b-41b4-abb9-efb0c3496da5-config-volume podName:0dc645c9-049b-41b4-abb9-efb0c3496da5 nodeName:}" failed. No retries permitted until 2024-09-23 12:42:59.162230405 +0000 UTC m=+36.839269935 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0dc645c9-049b-41b4-abb9-efb0c3496da5-config-volume") pod "coredns-7c65d6cfc9-s5jv2" (UID: "0dc645c9-049b-41b4-abb9-efb0c3496da5") : object "kube-system"/"coredns" not registered
	Sep 23 12:42:43 multinode-915704 kubelet[1406]: E0923 12:42:43.262912    1406 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 23 12:42:43 multinode-915704 kubelet[1406]: E0923 12:42:43.263063    1406 projected.go:194] Error preparing data for projected volume kube-api-access-rdw8p for pod default/busybox-7dff88458-h6n4v: object "default"/"kube-root-ca.crt" not registered
	Sep 23 12:42:43 multinode-915704 kubelet[1406]: E0923 12:42:43.263327    1406 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78bb2d3f-74bb-4ebe-a4ed-4a868066da48-kube-api-access-rdw8p podName:78bb2d3f-74bb-4ebe-a4ed-4a868066da48 nodeName:}" failed. No retries permitted until 2024-09-23 12:42:59.263268496 +0000 UTC m=+36.940308037 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-rdw8p" (UniqueName: "kubernetes.io/projected/78bb2d3f-74bb-4ebe-a4ed-4a868066da48-kube-api-access-rdw8p") pod "busybox-7dff88458-h6n4v" (UID: "78bb2d3f-74bb-4ebe-a4ed-4a868066da48") : object "default"/"kube-root-ca.crt" not registered
	Sep 23 12:42:58 multinode-915704 kubelet[1406]: I0923 12:42:58.559658    1406 scope.go:117] "RemoveContainer" containerID="879b72b8d2590363ece7c0b9a6499331ad29ee7b29e8d89c65c9c406db8e0384"
	Sep 23 12:42:58 multinode-915704 kubelet[1406]: I0923 12:42:58.559995    1406 scope.go:117] "RemoveContainer" containerID="a7d098fca98a3f12ea4e1585d19853c2f768acd699ee55c0ba8c84df9cce3156"
	Sep 23 12:42:58 multinode-915704 kubelet[1406]: E0923 12:42:58.560192    1406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(ec90818c-184f-4066-a5c9-f4875d0b1354)\"" pod="kube-system/storage-provisioner" podUID="ec90818c-184f-4066-a5c9-f4875d0b1354"
	Sep 23 12:43:11 multinode-915704 kubelet[1406]: I0923 12:43:11.549583    1406 scope.go:117] "RemoveContainer" containerID="a7d098fca98a3f12ea4e1585d19853c2f768acd699ee55c0ba8c84df9cce3156"
	Sep 23 12:43:22 multinode-915704 kubelet[1406]: E0923 12:43:22.575139    1406 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 12:43:22 multinode-915704 kubelet[1406]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:43:22 multinode-915704 kubelet[1406]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:43:22 multinode-915704 kubelet[1406]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:43:22 multinode-915704 kubelet[1406]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 23 12:44:22 multinode-915704 kubelet[1406]: E0923 12:44:22.571954    1406 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 23 12:44:22 multinode-915704 kubelet[1406]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 23 12:44:22 multinode-915704 kubelet[1406]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 23 12:44:22 multinode-915704 kubelet[1406]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 23 12:44:22 multinode-915704 kubelet[1406]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-915704 -n multinode-915704
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-915704 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (154.65s)


Test pass (307/340)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 22.3
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 10.5
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.14
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.63
22 TestOffline 117.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 218.5
29 TestAddons/serial/Volcano 44.01
31 TestAddons/serial/GCPAuth/Namespaces 0.12
34 TestAddons/parallel/Ingress 20.88
35 TestAddons/parallel/InspektorGadget 11.67
36 TestAddons/parallel/MetricsServer 6.76
38 TestAddons/parallel/CSI 52.09
39 TestAddons/parallel/Headlamp 17.5
40 TestAddons/parallel/CloudSpanner 6.48
41 TestAddons/parallel/LocalPath 55.33
42 TestAddons/parallel/NvidiaDevicePlugin 6.46
43 TestAddons/parallel/Yakd 11.89
44 TestAddons/StoppedEnableDisable 13.6
45 TestCertOptions 103.74
46 TestCertExpiration 351.7
47 TestDockerFlags 51.63
48 TestForceSystemdFlag 102.51
49 TestForceSystemdEnv 77.74
51 TestKVMDriverInstallOrUpdate 4.48
55 TestErrorSpam/setup 47.44
56 TestErrorSpam/start 0.36
57 TestErrorSpam/status 0.77
58 TestErrorSpam/pause 1.21
59 TestErrorSpam/unpause 1.37
60 TestErrorSpam/stop 15.38
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 52.25
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 39.46
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.14
72 TestFunctional/serial/CacheCmd/cache/add_local 1.8
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
77 TestFunctional/serial/CacheCmd/cache/delete 0.1
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 42.63
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.03
83 TestFunctional/serial/LogsFileCmd 1.01
84 TestFunctional/serial/InvalidService 4.33
86 TestFunctional/parallel/ConfigCmd 0.35
87 TestFunctional/parallel/DashboardCmd 14.26
88 TestFunctional/parallel/DryRun 0.31
89 TestFunctional/parallel/InternationalLanguage 0.14
90 TestFunctional/parallel/StatusCmd 0.98
94 TestFunctional/parallel/ServiceCmdConnect 28.58
95 TestFunctional/parallel/AddonsCmd 0.31
96 TestFunctional/parallel/PersistentVolumeClaim 52.01
98 TestFunctional/parallel/SSHCmd 0.42
99 TestFunctional/parallel/CpCmd 1.31
100 TestFunctional/parallel/MySQL 29.71
101 TestFunctional/parallel/FileSync 0.2
102 TestFunctional/parallel/CertSync 1.31
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
110 TestFunctional/parallel/License 1.05
111 TestFunctional/parallel/Version/short 0.06
112 TestFunctional/parallel/Version/components 0.63
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.4
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
117 TestFunctional/parallel/ImageCommands/ImageBuild 4.43
118 TestFunctional/parallel/ImageCommands/Setup 1.86
119 TestFunctional/parallel/DockerEnv/bash 0.82
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
123 TestFunctional/parallel/ServiceCmd/DeployApp 29.47
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.07
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.74
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.59
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.41
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
140 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
141 TestFunctional/parallel/ProfileCmd/profile_list 0.34
142 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
143 TestFunctional/parallel/ServiceCmd/List 0.49
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
145 TestFunctional/parallel/MountCmd/any-port 8.65
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
147 TestFunctional/parallel/ServiceCmd/Format 0.3
148 TestFunctional/parallel/ServiceCmd/URL 0.31
149 TestFunctional/parallel/MountCmd/specific-port 1.59
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.27
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
154 TestGvisorAddon 266.85
157 TestMultiControlPlane/serial/StartCluster 325.05
158 TestMultiControlPlane/serial/DeployApp 7.04
159 TestMultiControlPlane/serial/PingHostFromPods 1.24
160 TestMultiControlPlane/serial/AddWorkerNode 65.95
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
163 TestMultiControlPlane/serial/CopyFile 13.01
164 TestMultiControlPlane/serial/StopSecondaryNode 13.98
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
166 TestMultiControlPlane/serial/RestartSecondaryNode 44.45
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 260.38
169 TestMultiControlPlane/serial/DeleteSecondaryNode 7.08
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
171 TestMultiControlPlane/serial/StopCluster 38.35
172 TestMultiControlPlane/serial/RestartCluster 161.77
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
174 TestMultiControlPlane/serial/AddSecondaryNode 80.02
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
178 TestImageBuild/serial/Setup 44.72
179 TestImageBuild/serial/NormalBuild 2.77
180 TestImageBuild/serial/BuildWithBuildArg 1.43
181 TestImageBuild/serial/BuildWithDockerIgnore 0.98
182 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.87
186 TestJSONOutput/start/Command 91.57
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 0.55
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 0.51
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 7.43
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 0.2
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 97.14
218 TestMountStart/serial/StartWithMountFirst 28.14
219 TestMountStart/serial/VerifyMountFirst 0.37
220 TestMountStart/serial/StartWithMountSecond 28.07
221 TestMountStart/serial/VerifyMountSecond 0.38
222 TestMountStart/serial/DeleteFirst 0.7
223 TestMountStart/serial/VerifyMountPostDelete 0.39
224 TestMountStart/serial/Stop 2.28
225 TestMountStart/serial/RestartStopped 26.89
226 TestMountStart/serial/VerifyMountPostStop 0.38
229 TestMultiNode/serial/FreshStart2Nodes 127.27
230 TestMultiNode/serial/DeployApp2Nodes 4.89
231 TestMultiNode/serial/PingHostFrom2Pods 0.82
232 TestMultiNode/serial/AddNode 57.71
233 TestMultiNode/serial/MultiNodeLabels 0.07
234 TestMultiNode/serial/ProfileList 0.6
235 TestMultiNode/serial/CopyFile 7.17
236 TestMultiNode/serial/StopNode 3.35
237 TestMultiNode/serial/StartAfterStop 41.98
238 TestMultiNode/serial/RestartKeepsNodes 175.63
239 TestMultiNode/serial/DeleteNode 2.34
240 TestMultiNode/serial/StopMultiNode 25
242 TestMultiNode/serial/ValidateNameConflict 48.47
247 TestPreload 193.79
249 TestScheduledStopUnix 118.12
250 TestSkaffold 128.2
253 TestRunningBinaryUpgrade 135.97
255 TestKubernetesUpgrade 167.38
268 TestStoppedBinaryUpgrade/Setup 2.31
269 TestStoppedBinaryUpgrade/Upgrade 161.47
271 TestPause/serial/Start 73.08
280 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
281 TestNoKubernetes/serial/StartWithK8s 86.03
282 TestNetworkPlugins/group/auto/Start 130.2
283 TestPause/serial/SecondStartNoReconfiguration 78.54
284 TestNoKubernetes/serial/StartWithStopK8s 30.62
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
286 TestNetworkPlugins/group/kindnet/Start 84.32
287 TestNoKubernetes/serial/Start 48.94
288 TestPause/serial/Pause 1.01
289 TestPause/serial/VerifyStatus 0.3
290 TestPause/serial/Unpause 0.66
291 TestPause/serial/PauseAgain 0.73
292 TestPause/serial/DeletePaused 1.1
293 TestPause/serial/VerifyDeletedResources 0.51
294 TestNetworkPlugins/group/calico/Start 96.13
295 TestNetworkPlugins/group/auto/KubeletFlags 0.23
296 TestNetworkPlugins/group/auto/NetCatPod 13.32
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
298 TestNoKubernetes/serial/ProfileList 1.84
299 TestNoKubernetes/serial/Stop 2.31
300 TestNoKubernetes/serial/StartNoArgs 44.62
301 TestNetworkPlugins/group/auto/DNS 0.16
302 TestNetworkPlugins/group/auto/Localhost 0.13
303 TestNetworkPlugins/group/auto/HairPin 0.13
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
306 TestNetworkPlugins/group/kindnet/NetCatPod 13.29
307 TestNetworkPlugins/group/custom-flannel/Start 92.93
308 TestNetworkPlugins/group/kindnet/DNS 0.16
309 TestNetworkPlugins/group/kindnet/Localhost 0.18
310 TestNetworkPlugins/group/kindnet/HairPin 0.15
311 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
312 TestNetworkPlugins/group/false/Start 120.44
313 TestNetworkPlugins/group/enable-default-cni/Start 108.6
314 TestNetworkPlugins/group/calico/ControllerPod 6.01
315 TestNetworkPlugins/group/calico/KubeletFlags 0.21
316 TestNetworkPlugins/group/calico/NetCatPod 12.29
317 TestNetworkPlugins/group/calico/DNS 0.16
318 TestNetworkPlugins/group/calico/Localhost 0.15
319 TestNetworkPlugins/group/calico/HairPin 0.13
320 TestNetworkPlugins/group/flannel/Start 88.79
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
323 TestNetworkPlugins/group/custom-flannel/DNS 0.21
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
326 TestNetworkPlugins/group/bridge/Start 78.28
327 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
328 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.85
329 TestNetworkPlugins/group/false/KubeletFlags 0.25
330 TestNetworkPlugins/group/false/NetCatPod 11.3
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
334 TestNetworkPlugins/group/false/DNS 0.19
335 TestNetworkPlugins/group/false/Localhost 0.14
336 TestNetworkPlugins/group/false/HairPin 0.13
337 TestNetworkPlugins/group/kubenet/Start 91.14
339 TestStartStop/group/old-k8s-version/serial/FirstStart 176.17
340 TestNetworkPlugins/group/flannel/ControllerPod 6.01
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
342 TestNetworkPlugins/group/flannel/NetCatPod 10.23
343 TestNetworkPlugins/group/flannel/DNS 0.19
344 TestNetworkPlugins/group/flannel/Localhost 0.14
345 TestNetworkPlugins/group/flannel/HairPin 0.14
346 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
347 TestNetworkPlugins/group/bridge/NetCatPod 14.3
349 TestStartStop/group/no-preload/serial/FirstStart 84.73
350 TestNetworkPlugins/group/bridge/DNS 0.16
351 TestNetworkPlugins/group/bridge/Localhost 0.14
352 TestNetworkPlugins/group/bridge/HairPin 0.16
354 TestStartStop/group/embed-certs/serial/FirstStart 85.86
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.21
356 TestNetworkPlugins/group/kubenet/NetCatPod 13.25
357 TestNetworkPlugins/group/kubenet/DNS 0.16
358 TestNetworkPlugins/group/kubenet/Localhost 0.14
359 TestNetworkPlugins/group/kubenet/HairPin 0.15
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 65.05
362 TestStartStop/group/no-preload/serial/DeployApp 10.39
363 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.07
364 TestStartStop/group/no-preload/serial/Stop 13.34
365 TestStartStop/group/embed-certs/serial/DeployApp 8.31
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
367 TestStartStop/group/no-preload/serial/SecondStart 307.77
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
369 TestStartStop/group/embed-certs/serial/Stop 13.41
370 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
371 TestStartStop/group/embed-certs/serial/SecondStart 389.45
372 TestStartStop/group/old-k8s-version/serial/DeployApp 10.57
373 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
374 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.01
375 TestStartStop/group/old-k8s-version/serial/Stop 13.35
376 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.89
377 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.34
378 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
379 TestStartStop/group/old-k8s-version/serial/SecondStart 399.41
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 313.64
382 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
384 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
385 TestStartStop/group/no-preload/serial/Pause 2.43
387 TestStartStop/group/newest-cni/serial/FirstStart 59.7
388 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
389 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
390 TestStartStop/group/newest-cni/serial/DeployApp 0
391 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.9
392 TestStartStop/group/newest-cni/serial/Stop 12.78
393 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
394 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.71
395 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
396 TestStartStop/group/newest-cni/serial/SecondStart 37.63
397 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 8.01
398 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
399 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
400 TestStartStop/group/embed-certs/serial/Pause 2.52
401 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
402 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
403 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
404 TestStartStop/group/newest-cni/serial/Pause 2.34
405 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
407 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
408 TestStartStop/group/old-k8s-version/serial/Pause 2.3
TestDownloadOnly/v1.20.0/json-events (22.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-535633 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-535633 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (22.303531048s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.30s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 11:52:58.948507  505012 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 11:52:58.948667  505012 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-535633
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-535633: exit status 85 (62.222435ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-535633 | jenkins | v1.34.0 | 23 Sep 24 11:52 UTC |          |
	|         | -p download-only-535633        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:52:36
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:52:36.685297  505024 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:52:36.685400  505024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:52:36.685408  505024 out.go:358] Setting ErrFile to fd 2...
	I0923 11:52:36.685412  505024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:52:36.685574  505024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	W0923 11:52:36.685704  505024 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19690-497735/.minikube/config/config.json: open /home/jenkins/minikube-integration/19690-497735/.minikube/config/config.json: no such file or directory
	I0923 11:52:36.686317  505024 out.go:352] Setting JSON to true
	I0923 11:52:36.687340  505024 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5699,"bootTime":1727086658,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:52:36.687453  505024 start.go:139] virtualization: kvm guest
	I0923 11:52:36.689881  505024 out.go:97] [download-only-535633] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 11:52:36.690047  505024 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 11:52:36.690125  505024 notify.go:220] Checking for updates...
	I0923 11:52:36.691424  505024 out.go:169] MINIKUBE_LOCATION=19690
	I0923 11:52:36.692837  505024 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:52:36.694276  505024 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 11:52:36.695652  505024 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	I0923 11:52:36.697177  505024 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 11:52:36.699634  505024 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 11:52:36.699872  505024 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:52:36.733119  505024 out.go:97] Using the kvm2 driver based on user configuration
	I0923 11:52:36.733162  505024 start.go:297] selected driver: kvm2
	I0923 11:52:36.733170  505024 start.go:901] validating driver "kvm2" against <nil>
	I0923 11:52:36.733559  505024 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:52:36.733682  505024 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-497735/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 11:52:36.749709  505024 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 11:52:36.749769  505024 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:52:36.750338  505024 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0923 11:52:36.750495  505024 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 11:52:36.750527  505024 cni.go:84] Creating CNI manager for ""
	I0923 11:52:36.750594  505024 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 11:52:36.750649  505024 start.go:340] cluster config:
	{Name:download-only-535633 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-535633 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:52:36.750863  505024 iso.go:125] acquiring lock: {Name:mkc30b88bda541d89938b3c13430927ceb85d23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:52:36.752965  505024 out.go:97] Downloading VM boot image ...
	I0923 11:52:36.753102  505024 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19690-497735/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0923 11:52:47.711839  505024 out.go:97] Starting "download-only-535633" primary control-plane node in "download-only-535633" cluster
	I0923 11:52:47.711867  505024 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 11:52:47.813476  505024 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0923 11:52:47.813560  505024 cache.go:56] Caching tarball of preloaded images
	I0923 11:52:47.813735  505024 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 11:52:47.815443  505024 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 11:52:47.815458  505024 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0923 11:52:47.914521  505024 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-535633 host does not exist
	  To start a cluster, run: "minikube start -p download-only-535633"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-535633
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (10.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-569713 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-569713 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=kvm2 : (10.495862479s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (10.50s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 11:53:09.780774  505012 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 11:53:09.780827  505012 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-569713
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-569713: exit status 85 (60.384531ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-535633 | jenkins | v1.34.0 | 23 Sep 24 11:52 UTC |                     |
	|         | -p download-only-535633        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 11:52 UTC | 23 Sep 24 11:52 UTC |
	| delete  | -p download-only-535633        | download-only-535633 | jenkins | v1.34.0 | 23 Sep 24 11:52 UTC | 23 Sep 24 11:52 UTC |
	| start   | -o=json --download-only        | download-only-569713 | jenkins | v1.34.0 | 23 Sep 24 11:52 UTC |                     |
	|         | -p download-only-569713        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:52:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:52:59.324094  505286 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:52:59.324345  505286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:52:59.324352  505286 out.go:358] Setting ErrFile to fd 2...
	I0923 11:52:59.324356  505286 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:52:59.324535  505286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	I0923 11:52:59.325116  505286 out.go:352] Setting JSON to true
	I0923 11:52:59.326060  505286 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5721,"bootTime":1727086658,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:52:59.326168  505286 start.go:139] virtualization: kvm guest
	I0923 11:52:59.328111  505286 out.go:97] [download-only-569713] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:52:59.328285  505286 notify.go:220] Checking for updates...
	I0923 11:52:59.329531  505286 out.go:169] MINIKUBE_LOCATION=19690
	I0923 11:52:59.330849  505286 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:52:59.331919  505286 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 11:52:59.333073  505286 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	I0923 11:52:59.334440  505286 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 11:52:59.337026  505286 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 11:52:59.337288  505286 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:52:59.371298  505286 out.go:97] Using the kvm2 driver based on user configuration
	I0923 11:52:59.371329  505286 start.go:297] selected driver: kvm2
	I0923 11:52:59.371335  505286 start.go:901] validating driver "kvm2" against <nil>
	I0923 11:52:59.371670  505286 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:52:59.371769  505286 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19690-497735/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0923 11:52:59.389209  505286 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0923 11:52:59.389272  505286 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:52:59.389805  505286 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0923 11:52:59.389981  505286 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 11:52:59.390015  505286 cni.go:84] Creating CNI manager for ""
	I0923 11:52:59.390066  505286 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:52:59.390075  505286 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 11:52:59.390144  505286 start.go:340] cluster config:
	{Name:download-only-569713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-569713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:52:59.390254  505286 iso.go:125] acquiring lock: {Name:mkc30b88bda541d89938b3c13430927ceb85d23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:52:59.391809  505286 out.go:97] Starting "download-only-569713" primary control-plane node in "download-only-569713" cluster
	I0923 11:52:59.391828  505286 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:52:59.840188  505286 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:52:59.840234  505286 cache.go:56] Caching tarball of preloaded images
	I0923 11:52:59.840425  505286 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:52:59.842141  505286 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 11:52:59.842172  505286 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 11:52:59.940712  505286 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 11:53:08.188134  505286 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 11:53:08.188252  505286 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19690-497735/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 11:53:08.929900  505286 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 11:53:08.930241  505286 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/download-only-569713/config.json ...
	I0923 11:53:08.930272  505286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/download-only-569713/config.json: {Name:mk10464713f3063fe4a1b97ff90e0351a7b67321 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:53:08.930436  505286 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:53:08.930584  505286 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19690-497735/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-569713 host does not exist
	  To start a cluster, run: "minikube start -p download-only-569713"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.14s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-569713
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
I0923 11:53:10.380865  505012 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-304120 --alsologtostderr --binary-mirror http://127.0.0.1:41975 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-304120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-304120
--- PASS: TestBinaryMirror (0.63s)

TestOffline (117.59s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-049082 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-049082 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m56.58795826s)
helpers_test.go:175: Cleaning up "offline-docker-049082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-049082
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-049082: (1.005150631s)
--- PASS: TestOffline (117.59s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-825629
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-825629: exit status 85 (53.830779ms)

-- stdout --
	* Profile "addons-825629" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-825629"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-825629
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-825629: exit status 85 (53.237832ms)

-- stdout --
	* Profile "addons-825629" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-825629"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (218.5s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-825629 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-825629 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns: (3m38.503761059s)
--- PASS: TestAddons/Setup (218.50s)

TestAddons/serial/Volcano (44.01s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 23.968813ms
addons_test.go:851: volcano-controller stabilized in 24.029129ms
addons_test.go:835: volcano-scheduler stabilized in 24.092606ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-4vj45" [955706b3-57ba-417a-9de0-b3239b36a6d7] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004917669s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-5cv7v" [178ada20-96c7-4f66-8618-5fc8497d3809] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003702751s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-lmmkd" [b900b2b7-5d5b-46d8-bfae-e0d81366fc34] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.004357221s
addons_test.go:870: (dbg) Run:  kubectl --context addons-825629 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-825629 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-825629 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [49d23c5a-700a-4e56-939b-7a48079b95f5] Pending
helpers_test.go:344: "test-job-nginx-0" [49d23c5a-700a-4e56-939b-7a48079b95f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [49d23c5a-700a-4e56-939b-7a48079b95f5] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.004391013s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-825629 addons disable volcano --alsologtostderr -v=1: (10.575509279s)
--- PASS: TestAddons/serial/Volcano (44.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-825629 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-825629 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Ingress (20.88s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-825629 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-825629 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-825629 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [504f1260-5af1-4d2b-8b62-cbc37cb3745a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [504f1260-5af1-4d2b-8b62-cbc37cb3745a] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004549508s
I0923 12:06:04.242657  505012 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-825629 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-825629 addons disable ingress-dns --alsologtostderr -v=1: (1.80587669s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-825629 addons disable ingress --alsologtostderr -v=1: (7.899929357s)
--- PASS: TestAddons/parallel/Ingress (20.88s)

TestAddons/parallel/InspektorGadget (11.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-v9trm" [e73f920e-d596-482d-9d20-b787baae49fd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004692007s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-825629
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-825629: (5.669068762s)
--- PASS: TestAddons/parallel/InspektorGadget (11.67s)

TestAddons/parallel/MetricsServer (6.76s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 14.195114ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-p7599" [0ea45ea8-1de0-4424-9d69-1ceaa23c38c6] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003424705s
addons_test.go:413: (dbg) Run:  kubectl --context addons-825629 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.76s)

TestAddons/parallel/CSI (52.09s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0923 12:05:48.156017  505012 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 12:05:48.160433  505012 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 12:05:48.160459  505012 kapi.go:107] duration metric: took 4.46817ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.475386ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-825629 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-825629 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e2f4c529-48db-4f3f-a396-4fd4886e3389] Pending
helpers_test.go:344: "task-pv-pod" [e2f4c529-48db-4f3f-a396-4fd4886e3389] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e2f4c529-48db-4f3f-a396-4fd4886e3389] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.00454786s
addons_test.go:528: (dbg) Run:  kubectl --context addons-825629 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-825629 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-825629 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-825629 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-825629 delete pod task-pv-pod: (1.379426784s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-825629 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-825629 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-825629 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9655bd75-9188-43b6-860a-21a008c02583] Pending
helpers_test.go:344: "task-pv-pod-restore" [9655bd75-9188-43b6-860a-21a008c02583] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9655bd75-9188-43b6-860a-21a008c02583] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004576931s
addons_test.go:570: (dbg) Run:  kubectl --context addons-825629 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-825629 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-825629 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-825629 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.671979394s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.09s)

TestAddons/parallel/Headlamp (17.5s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-825629 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-67w56" [36f374fb-b07b-42ee-bccd-5d90e2a7f1a7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-67w56" [36f374fb-b07b-42ee-bccd-5d90e2a7f1a7] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004651449s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-825629 addons disable headlamp --alsologtostderr -v=1: (5.676102163s)
--- PASS: TestAddons/parallel/Headlamp (17.50s)

TestAddons/parallel/CloudSpanner (6.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-j7bqw" [ad1e786f-9277-4c90-a7b9-1b8c96a30df6] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003571553s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-825629
--- PASS: TestAddons/parallel/CloudSpanner (6.48s)

TestAddons/parallel/LocalPath (55.33s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-825629 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-825629 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-825629 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b3c1ffdb-5a03-46a5-8711-dab3034be92b] Pending
helpers_test.go:344: "test-local-path" [b3c1ffdb-5a03-46a5-8711-dab3034be92b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b3c1ffdb-5a03-46a5-8711-dab3034be92b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b3c1ffdb-5a03-46a5-8711-dab3034be92b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.007164012s
addons_test.go:938: (dbg) Run:  kubectl --context addons-825629 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 ssh "cat /opt/local-path-provisioner/pvc-2838b3b1-f740-472d-b374-6eb70574df74_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-825629 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-825629 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-825629 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.435782422s)
--- PASS: TestAddons/parallel/LocalPath (55.33s)

TestAddons/parallel/NvidiaDevicePlugin (6.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hm4kd" [ad43bcf9-fa90-4052-85cb-9b74ad1d5716] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.010967467s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-825629
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.46s)

TestAddons/parallel/Yakd (11.89s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-5n85d" [698dfd9b-284d-44e0-b0da-96a979c72ea6] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.010387788s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-825629 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-825629 addons disable yakd --alsologtostderr -v=1: (5.875928281s)
--- PASS: TestAddons/parallel/Yakd (11.89s)

TestAddons/StoppedEnableDisable (13.6s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-825629
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-825629: (13.310624191s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-825629
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-825629
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-825629
--- PASS: TestAddons/StoppedEnableDisable (13.60s)

TestCertOptions (103.74s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-281010 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-281010 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m41.676053382s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-281010 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-281010 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-281010 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-281010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-281010
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-281010: (1.513439663s)
--- PASS: TestCertOptions (103.74s)

TestCertExpiration (351.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-258637 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-258637 --memory=2048 --cert-expiration=3m --driver=kvm2 : (2m5.494195493s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-258637 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-258637 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (45.138776718s)
helpers_test.go:175: Cleaning up "cert-expiration-258637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-258637
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-258637: (1.064166472s)
--- PASS: TestCertExpiration (351.70s)

TestDockerFlags (51.63s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-316380 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-316380 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (50.108709809s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-316380 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-316380 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-316380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-316380
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-316380: (1.048088507s)
--- PASS: TestDockerFlags (51.63s)

TestForceSystemdFlag (102.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-115581 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-115581 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m41.460443144s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-115581 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-115581" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-115581
--- PASS: TestForceSystemdFlag (102.51s)

TestForceSystemdEnv (77.74s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-804246 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-804246 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m16.262411668s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-804246 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-804246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-804246
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-804246: (1.204328647s)
--- PASS: TestForceSystemdEnv (77.74s)

TestKVMDriverInstallOrUpdate (4.48s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0923 12:59:05.332108  505012 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 12:59:05.332249  505012 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0923 12:59:05.369238  505012 install.go:62] docker-machine-driver-kvm2: exit status 1
W0923 12:59:05.369567  505012 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0923 12:59:05.369650  505012 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2761613922/001/docker-machine-driver-kvm2
I0923 12:59:05.613902  505012 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2761613922/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc00067bd40 gz:0xc00067bd48 tar:0xc00067bcb0 tar.bz2:0xc00067bcd0 tar.gz:0xc00067bce0 tar.xz:0xc00067bcf0 tar.zst:0xc00067bd30 tbz2:0xc00067bcd0 tgz:0xc00067bce0 txz:0xc00067bcf0 tzst:0xc00067bd30 xz:0xc00067bd90 zip:0xc00067bda0 zst:0xc00067bd98] Getters:map[file:0xc001c49580 http:0xc000576f00 https:0xc000576f50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 12:59:05.613978  505012 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2761613922/001/docker-machine-driver-kvm2
I0923 12:59:08.095353  505012 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 12:59:08.095473  505012 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0923 12:59:08.124224  505012 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0923 12:59:08.124267  505012 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0923 12:59:08.124349  505012 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0923 12:59:08.124396  505012 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2761613922/002/docker-machine-driver-kvm2
I0923 12:59:08.151022  505012 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2761613922/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc00067bd40 gz:0xc00067bd48 tar:0xc00067bcb0 tar.bz2:0xc00067bcd0 tar.gz:0xc00067bce0 tar.xz:0xc00067bcf0 tar.zst:0xc00067bd30 tbz2:0xc00067bcd0 tgz:0xc00067bce0 txz:0xc00067bcf0 tzst:0xc00067bd30 xz:0xc00067bd90 zip:0xc00067bda0 zst:0xc00067bd98] Getters:map[file:0xc000a6ca30 http:0xc000870460 https:0xc0008704b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 12:59:08.151076  505012 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2761613922/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.48s)

TestErrorSpam/setup (47.44s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-819814 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-819814 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-819814 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-819814 --driver=kvm2 : (47.43950823s)
--- PASS: TestErrorSpam/setup (47.44s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.21s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 pause
--- PASS: TestErrorSpam/pause (1.21s)

TestErrorSpam/unpause (1.37s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 unpause
--- PASS: TestErrorSpam/unpause (1.37s)

TestErrorSpam/stop (15.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 stop: (12.500498381s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 stop: (1.726374619s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-819814 --log_dir /tmp/nospam-819814 stop: (1.150041623s)
--- PASS: TestErrorSpam/stop (15.38s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19690-497735/.minikube/files/etc/test/nested/copy/505012/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.25s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-544435 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-544435 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (52.250079599s)
--- PASS: TestFunctional/serial/StartWithProxy (52.25s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.46s)

=== RUN   TestFunctional/serial/SoftStart
I0923 12:09:10.126265  505012 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-544435 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-544435 --alsologtostderr -v=8: (39.454869905s)
functional_test.go:663: soft start took 39.455762158s for "functional-544435" cluster.
I0923 12:09:49.581537  505012 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (39.46s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-544435 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-544435 cache add registry.k8s.io/pause:3.1: (1.517496731s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-544435 cache add registry.k8s.io/pause:3.3: (1.312469782s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-544435 cache add registry.k8s.io/pause:latest: (1.308938967s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.14s)

TestFunctional/serial/CacheCmd/cache/add_local (1.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-544435 /tmp/TestFunctionalserialCacheCmdcacheadd_local1191921503/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 cache add minikube-local-cache-test:functional-544435
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-544435 cache add minikube-local-cache-test:functional-544435: (1.442062848s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 cache delete minikube-local-cache-test:functional-544435
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-544435
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.80s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-544435 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (222.628016ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 kubectl -- --context functional-544435 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-544435 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (42.63s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-544435 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-544435 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.631651667s)
functional_test.go:761: restart took 42.631776736s for "functional-544435" cluster.
I0923 12:10:40.571470  505012 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (42.63s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-544435 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
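The ComponentHealth check above queries `kubectl get po -l tier=control-plane -n kube-system -o=json` and requires each control-plane pod to report phase `Running` and a `Ready` condition of `True`. A rough stand-alone sketch of that check over Pod-list JSON (field paths follow the Kubernetes Pod API; the sample data and function name are fabricated for illustration, not minikube's actual test code):

```python
import json

# Sketch of the control-plane health check seen above: every pod must be
# phase "Running" and carry a Ready condition with status "True".
# The sample is fabricated; real input comes from
# `kubectl get po -l tier=control-plane -n kube-system -o=json`.
sample = json.dumps({
    "items": [
        {
            "metadata": {"name": "etcd-functional"},
            "status": {
                "phase": "Running",
                "conditions": [{"type": "Ready", "status": "True"}],
            },
        }
    ]
})

def control_plane_healthy(pod_list_json: str) -> bool:
    pods = json.loads(pod_list_json)["items"]
    for pod in pods:
        status = pod["status"]
        if status["phase"] != "Running":
            return False
        ready = [c for c in status.get("conditions", []) if c["type"] == "Ready"]
        if not ready or ready[0]["status"] != "True":
            return False
    return bool(pods)

print(control_plane_healthy(sample))  # True for the sample above
```

This mirrors the per-component "phase: Running" / "status: Ready" lines logged for etcd, kube-apiserver, kube-controller-manager, and kube-scheduler.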
TestFunctional/serial/LogsCmd (1.03s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-544435 logs: (1.030369622s)
--- PASS: TestFunctional/serial/LogsCmd (1.03s)

TestFunctional/serial/LogsFileCmd (1.01s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 logs --file /tmp/TestFunctionalserialLogsFileCmd3432806042/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-544435 logs --file /tmp/TestFunctionalserialLogsFileCmd3432806042/001/logs.txt: (1.012185498s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.01s)

TestFunctional/serial/InvalidService (4.33s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-544435 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-544435
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-544435: exit status 115 (278.308389ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.197:31609 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-544435 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.33s)

TestFunctional/parallel/ConfigCmd (0.35s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-544435 config get cpus: exit status 14 (57.007941ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-544435 config get cpus: exit status 14 (50.846834ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

TestFunctional/parallel/DashboardCmd (14.26s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-544435 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-544435 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 514837: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.26s)

TestFunctional/parallel/DryRun (0.31s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-544435 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-544435 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (155.514566ms)

-- stdout --
	* [functional-544435] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 12:11:19.713654  514678 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:11:19.713856  514678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:11:19.713867  514678 out.go:358] Setting ErrFile to fd 2...
	I0923 12:11:19.713873  514678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:11:19.714132  514678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	I0923 12:11:19.714824  514678 out.go:352] Setting JSON to false
	I0923 12:11:19.716567  514678 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6822,"bootTime":1727086658,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:11:19.716706  514678 start.go:139] virtualization: kvm guest
	I0923 12:11:19.719897  514678 out.go:177] * [functional-544435] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 12:11:19.722622  514678 notify.go:220] Checking for updates...
	I0923 12:11:19.722691  514678 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:11:19.724767  514678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:11:19.727138  514678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 12:11:19.728945  514678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	I0923 12:11:19.730777  514678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:11:19.732952  514678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:11:19.735210  514678 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:11:19.735728  514678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:11:19.735810  514678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:11:19.752302  514678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39535
	I0923 12:11:19.752848  514678 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:11:19.753555  514678 main.go:141] libmachine: Using API Version  1
	I0923 12:11:19.753584  514678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:11:19.753944  514678 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:11:19.754219  514678 main.go:141] libmachine: (functional-544435) Calling .DriverName
	I0923 12:11:19.754517  514678 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:11:19.754859  514678 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:11:19.754917  514678 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:11:19.772417  514678 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45187
	I0923 12:11:19.772927  514678 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:11:19.773608  514678 main.go:141] libmachine: Using API Version  1
	I0923 12:11:19.773655  514678 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:11:19.773965  514678 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:11:19.774199  514678 main.go:141] libmachine: (functional-544435) Calling .DriverName
	I0923 12:11:19.810258  514678 out.go:177] * Using the kvm2 driver based on existing profile
	I0923 12:11:19.811763  514678 start.go:297] selected driver: kvm2
	I0923 12:11:19.811780  514678 start.go:901] validating driver "kvm2" against &{Name:functional-544435 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-544435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.197 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0
s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:11:19.811912  514678 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:11:19.814468  514678 out.go:201] 
	W0923 12:11:19.816372  514678 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 12:11:19.818121  514678 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-544435 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.31s)
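The dry-run failure above exercises minikube's memory validation: an allocation below the usable minimum is rejected with `RSRC_INSUFFICIENT_REQ_MEMORY` (exit status 23). A minimal sketch of that floor check, taking the 1800 MB minimum from the error text above (the function name is illustrative, not minikube's actual code):

```python
# Illustrative re-implementation of the memory floor check behind the
# RSRC_INSUFFICIENT_REQ_MEMORY error above; not minikube's actual code.
MIN_USABLE_MB = 1800  # usable minimum quoted in the error message

def validate_requested_memory(requested_mb: int) -> tuple[bool, str]:
    """Return (ok, message) for a requested memory allocation in MB."""
    if requested_mb < MIN_USABLE_MB:
        return False, (
            f"Requested memory allocation {requested_mb}MiB is less than "
            f"the usable minimum of {MIN_USABLE_MB}MB"
        )
    return True, "ok"

# The `--memory 250MB` case exercised by this test:
ok, msg = validate_requested_memory(250)
print(ok, msg)
```

The same validation fires in the InternationalLanguage test below, where only the message's locale differs.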
TestFunctional/parallel/InternationalLanguage (0.14s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-544435 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-544435 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (144.104551ms)

-- stdout --
	* [functional-544435] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 12:11:25.541936  514918 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:11:25.542057  514918 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:11:25.542068  514918 out.go:358] Setting ErrFile to fd 2...
	I0923 12:11:25.542079  514918 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:11:25.542381  514918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	I0923 12:11:25.542965  514918 out.go:352] Setting JSON to false
	I0923 12:11:25.543982  514918 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6827,"bootTime":1727086658,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:11:25.544091  514918 start.go:139] virtualization: kvm guest
	I0923 12:11:25.546361  514918 out.go:177] * [functional-544435] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0923 12:11:25.547749  514918 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:11:25.547752  514918 notify.go:220] Checking for updates...
	I0923 12:11:25.550153  514918 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:11:25.551597  514918 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	I0923 12:11:25.553059  514918 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	I0923 12:11:25.554313  514918 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:11:25.555430  514918 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:11:25.557415  514918 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:11:25.558054  514918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:11:25.558134  514918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:11:25.574122  514918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41921
	I0923 12:11:25.574706  514918 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:11:25.575311  514918 main.go:141] libmachine: Using API Version  1
	I0923 12:11:25.575344  514918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:11:25.575782  514918 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:11:25.575994  514918 main.go:141] libmachine: (functional-544435) Calling .DriverName
	I0923 12:11:25.576262  514918 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:11:25.576567  514918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:11:25.576604  514918 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:11:25.592385  514918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46513
	I0923 12:11:25.592963  514918 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:11:25.593591  514918 main.go:141] libmachine: Using API Version  1
	I0923 12:11:25.593619  514918 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:11:25.594034  514918 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:11:25.594257  514918 main.go:141] libmachine: (functional-544435) Calling .DriverName
	I0923 12:11:25.628690  514918 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0923 12:11:25.630256  514918 start.go:297] selected driver: kvm2
	I0923 12:11:25.630284  514918 start.go:901] validating driver "kvm2" against &{Name:functional-544435 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-544435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.197 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpir
ation:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:11:25.630446  514918 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:11:25.633144  514918 out.go:201] 
	W0923 12:11:25.634585  514918 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 12:11:25.635890  514918 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.98s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.98s)

TestFunctional/parallel/ServiceCmdConnect (28.58s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-544435 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-544435 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hzfmm" [116678d1-a25d-4c02-a252-427bb7da3eed] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hzfmm" [116678d1-a25d-4c02-a252-427bb7da3eed] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 28.005540963s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.197:32191
functional_test.go:1675: http://192.168.39.197:32191: success! body:

Hostname: hello-node-connect-67bdd5bbb4-hzfmm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.197:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.197:32191
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.58s)

TestFunctional/parallel/AddonsCmd (0.31s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.31s)

TestFunctional/parallel/PersistentVolumeClaim (52.01s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0f8e82b9-1701-478f-83db-d7cd2970fa87] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00418967s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-544435 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-544435 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-544435 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-544435 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bca51fd3-b42c-4681-b02c-597ba3621582] Pending
helpers_test.go:344: "sp-pod" [bca51fd3-b42c-4681-b02c-597ba3621582] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bca51fd3-b42c-4681-b02c-597ba3621582] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.003333831s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-544435 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-544435 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-544435 delete -f testdata/storage-provisioner/pod.yaml: (1.103739305s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-544435 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0156b618-7965-4182-af55-12c75ebaca65] Pending
helpers_test.go:344: "sp-pod" [0156b618-7965-4182-af55-12c75ebaca65] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0156b618-7965-4182-af55-12c75ebaca65] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004244549s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-544435 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (52.01s)

TestFunctional/parallel/SSHCmd (0.42s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.31s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh -n functional-544435 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 cp functional-544435:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2773163498/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh -n functional-544435 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh -n functional-544435 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

TestFunctional/parallel/MySQL (29.71s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-544435 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-k7zpn" [7c883da5-57e0-4e0e-8603-cfb41f3d96d5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-k7zpn" [7c883da5-57e0-4e0e-8603-cfb41f3d96d5] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.004922323s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-544435 exec mysql-6cdb49bbb-k7zpn -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-544435 exec mysql-6cdb49bbb-k7zpn -- mysql -ppassword -e "show databases;": exit status 1 (181.537939ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0923 12:11:13.336697  505012 retry.go:31] will retry after 1.27739588s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-544435 exec mysql-6cdb49bbb-k7zpn -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-544435 exec mysql-6cdb49bbb-k7zpn -- mysql -ppassword -e "show databases;": exit status 1 (304.364431ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0923 12:11:14.919453  505012 retry.go:31] will retry after 1.406066738s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-544435 exec mysql-6cdb49bbb-k7zpn -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-544435 exec mysql-6cdb49bbb-k7zpn -- mysql -ppassword -e "show databases;": exit status 1 (237.807389ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0923 12:11:16.564318  505012 retry.go:31] will retry after 1.886360903s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-544435 exec mysql-6cdb49bbb-k7zpn -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (29.71s)
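The three Non-zero exits above are expected while the MySQL container is still initializing: the server first rejects the credentials (ERROR 1045) and then the socket itself (ERROR 2002), and the harness's retry.go re-runs the probe with a growing delay until it succeeds. A minimal shell sketch of that retry-with-backoff pattern (this is not the harness's actual code; the real kubectl probe is only indicated in a comment):

```shell
#!/bin/sh
# retry N cmd...: re-run cmd up to N times, doubling the delay after each
# failure, mirroring the "will retry after ..." lines logged above.
retry() {
    attempts=$1; shift
    delay=1
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0        # probe succeeded
        rc=$?
        echo "will retry after ${delay}s: exit status $rc" >&2
        sleep "$delay"
        delay=$((delay * 2))    # crude exponential backoff; retry.go also adds jitter
        i=$((i + 1))
    done
    return 1                    # out of attempts
}

# In the test above the probe is (roughly):
#   kubectl --context functional-544435 exec mysql-... -- mysql -ppassword -e "show databases;"
retry 3 true && echo "probe succeeded"
```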

TestFunctional/parallel/FileSync (0.2s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/505012/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "sudo cat /etc/test/nested/copy/505012/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.31s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/505012.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "sudo cat /etc/ssl/certs/505012.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/505012.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "sudo cat /usr/share/ca-certificates/505012.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5050122.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "sudo cat /etc/ssl/certs/5050122.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5050122.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "sudo cat /usr/share/ca-certificates/5050122.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.31s)
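The hash-named paths checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: /etc/ssl/certs/&lt;hash&gt;.0 points at the certificate whose subject name hashes to that value, so a synced .pem becomes discoverable by the trust store. A sketch of computing such a hash (assumes `openssl` is installed; the certificate here is a throwaway self-signed one, not the test's .pem):

```shell
#!/bin/sh
# Generate a throwaway self-signed certificate, then compute the subject hash
# that OpenSSL-style trust stores use for the <hash>.0 filename.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=certsync-demo" \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null
# Prints an 8-hex-digit hash, the same shape as 51391683 above.
openssl x509 -noout -subject_hash -in /tmp/demo-cert.pem
```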

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-544435 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-544435 ssh "sudo systemctl is-active crio": exit status 1 (242.06975ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)
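`systemctl is-active` prints the unit state and exits non-zero when the unit is not active (systemd uses status 3 for inactive), which is why the Non-zero exit plus `inactive` on stdout above counts as a pass for the crio check. A stand-in sketch of that contract that runs without systemd (the function and unit names here are illustrative, not minikube's code):

```shell
#!/bin/sh
# Stand-in for `systemctl is-active <unit>`: print the state and exit 0 only
# when active; inactive units get exit status 3, as in the log above.
is_active() {
    case "$1" in
        docker) echo "active";   return 0 ;;  # the configured runtime in this run
        *)      echo "inactive"; return 3 ;;  # e.g. crio, which is disabled
    esac
}

is_active crio
echo "exit=$?"
```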

TestFunctional/parallel/License (1.05s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2288: (dbg) Done: out/minikube-linux-amd64 license: (1.049241545s)
--- PASS: TestFunctional/parallel/License (1.05s)

TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.63s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.63s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-544435 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-544435
docker.io/kicbase/echo-server:functional-544435
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-544435 image ls --format short --alsologtostderr:
I0923 12:11:26.716606  515130 out.go:345] Setting OutFile to fd 1 ...
I0923 12:11:26.716940  515130 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:11:26.716954  515130 out.go:358] Setting ErrFile to fd 2...
I0923 12:11:26.716962  515130 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:11:26.717242  515130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
I0923 12:11:26.718256  515130 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:11:26.718431  515130 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:11:26.719103  515130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0923 12:11:26.719168  515130 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:11:26.738710  515130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39175
I0923 12:11:26.739427  515130 main.go:141] libmachine: () Calling .GetVersion
I0923 12:11:26.740235  515130 main.go:141] libmachine: Using API Version  1
I0923 12:11:26.740273  515130 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:11:26.740882  515130 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:11:26.741161  515130 main.go:141] libmachine: (functional-544435) Calling .GetState
I0923 12:11:26.744411  515130 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0923 12:11:26.744479  515130 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:11:26.762093  515130 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42583
I0923 12:11:26.762547  515130 main.go:141] libmachine: () Calling .GetVersion
I0923 12:11:26.763107  515130 main.go:141] libmachine: Using API Version  1
I0923 12:11:26.763130  515130 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:11:26.763552  515130 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:11:26.763729  515130 main.go:141] libmachine: (functional-544435) Calling .DriverName
I0923 12:11:26.763937  515130 ssh_runner.go:195] Run: systemctl --version
I0923 12:11:26.763992  515130 main.go:141] libmachine: (functional-544435) Calling .GetSSHHostname
I0923 12:11:26.767126  515130 main.go:141] libmachine: (functional-544435) DBG | domain functional-544435 has defined MAC address 52:54:00:a9:44:2c in network mk-functional-544435
I0923 12:11:26.767357  515130 main.go:141] libmachine: (functional-544435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:44:2c", ip: ""} in network mk-functional-544435: {Iface:virbr1 ExpiryTime:2024-09-23 13:08:32 +0000 UTC Type:0 Mac:52:54:00:a9:44:2c Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:functional-544435 Clientid:01:52:54:00:a9:44:2c}
I0923 12:11:26.767385  515130 main.go:141] libmachine: (functional-544435) DBG | domain functional-544435 has defined IP address 192.168.39.197 and MAC address 52:54:00:a9:44:2c in network mk-functional-544435
I0923 12:11:26.767633  515130 main.go:141] libmachine: (functional-544435) Calling .GetSSHPort
I0923 12:11:26.767840  515130 main.go:141] libmachine: (functional-544435) Calling .GetSSHKeyPath
I0923 12:11:26.768006  515130 main.go:141] libmachine: (functional-544435) Calling .GetSSHUsername
I0923 12:11:26.768119  515130 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/functional-544435/id_rsa Username:docker}
I0923 12:11:26.866472  515130 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0923 12:11:26.919096  515130 main.go:141] libmachine: Making call to close driver server
I0923 12:11:26.919128  515130 main.go:141] libmachine: (functional-544435) Calling .Close
I0923 12:11:26.919451  515130 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:11:26.919469  515130 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:11:26.919479  515130 main.go:141] libmachine: Making call to close driver server
I0923 12:11:26.919488  515130 main.go:141] libmachine: (functional-544435) Calling .Close
I0923 12:11:26.919780  515130 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:11:26.919799  515130 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-544435 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kicbase/echo-server               | functional-544435 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-544435 | 2d0014f8da5e0 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-544435 image ls --format table --alsologtostderr:
I0923 12:11:30.469616  515632 out.go:345] Setting OutFile to fd 1 ...
I0923 12:11:30.469729  515632 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:11:30.469737  515632 out.go:358] Setting ErrFile to fd 2...
I0923 12:11:30.469744  515632 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:11:30.469948  515632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
I0923 12:11:30.470589  515632 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:11:30.470701  515632 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:11:30.471112  515632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0923 12:11:30.471162  515632 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:11:30.487796  515632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39703
I0923 12:11:30.488400  515632 main.go:141] libmachine: () Calling .GetVersion
I0923 12:11:30.489007  515632 main.go:141] libmachine: Using API Version  1
I0923 12:11:30.489035  515632 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:11:30.489467  515632 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:11:30.489679  515632 main.go:141] libmachine: (functional-544435) Calling .GetState
I0923 12:11:30.491614  515632 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0923 12:11:30.491665  515632 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:11:30.508129  515632 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
I0923 12:11:30.508542  515632 main.go:141] libmachine: () Calling .GetVersion
I0923 12:11:30.509126  515632 main.go:141] libmachine: Using API Version  1
I0923 12:11:30.509150  515632 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:11:30.509475  515632 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:11:30.509689  515632 main.go:141] libmachine: (functional-544435) Calling .DriverName
I0923 12:11:30.509904  515632 ssh_runner.go:195] Run: systemctl --version
I0923 12:11:30.509954  515632 main.go:141] libmachine: (functional-544435) Calling .GetSSHHostname
I0923 12:11:30.512670  515632 main.go:141] libmachine: (functional-544435) DBG | domain functional-544435 has defined MAC address 52:54:00:a9:44:2c in network mk-functional-544435
I0923 12:11:30.513072  515632 main.go:141] libmachine: (functional-544435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:44:2c", ip: ""} in network mk-functional-544435: {Iface:virbr1 ExpiryTime:2024-09-23 13:08:32 +0000 UTC Type:0 Mac:52:54:00:a9:44:2c Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:functional-544435 Clientid:01:52:54:00:a9:44:2c}
I0923 12:11:30.513109  515632 main.go:141] libmachine: (functional-544435) DBG | domain functional-544435 has defined IP address 192.168.39.197 and MAC address 52:54:00:a9:44:2c in network mk-functional-544435
I0923 12:11:30.513198  515632 main.go:141] libmachine: (functional-544435) Calling .GetSSHPort
I0923 12:11:30.513408  515632 main.go:141] libmachine: (functional-544435) Calling .GetSSHKeyPath
I0923 12:11:30.513596  515632 main.go:141] libmachine: (functional-544435) Calling .GetSSHUsername
I0923 12:11:30.513753  515632 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/functional-544435/id_rsa Username:docker}
I0923 12:11:30.636076  515632 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0923 12:11:30.693148  515632 main.go:141] libmachine: Making call to close driver server
I0923 12:11:30.693170  515632 main.go:141] libmachine: (functional-544435) Calling .Close
I0923 12:11:30.693587  515632 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:11:30.693612  515632 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:11:30.693616  515632 main.go:141] libmachine: (functional-544435) DBG | Closing plugin on server side
I0923 12:11:30.693626  515632 main.go:141] libmachine: Making call to close driver server
I0923 12:11:30.693634  515632 main.go:141] libmachine: (functional-544435) Calling .Close
I0923 12:11:30.693958  515632 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:11:30.693974  515632 main.go:141] libmachine: (functional-544435) DBG | Closing plugin on server side
I0923 12:11:30.693983  515632 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.40s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-544435 image ls --format json --alsologtostderr:
[{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"2d0014f8da5e0e099e3ef2dd70465bb3a9512dc53558e52cbeb784495ad3b90f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-544435"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-544435"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-544435 image ls --format json --alsologtostderr:
I0923 12:11:30.208603  515592 out.go:345] Setting OutFile to fd 1 ...
I0923 12:11:30.208735  515592 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:11:30.208746  515592 out.go:358] Setting ErrFile to fd 2...
I0923 12:11:30.208753  515592 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:11:30.208928  515592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
I0923 12:11:30.209615  515592 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:11:30.209737  515592 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:11:30.210158  515592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0923 12:11:30.210214  515592 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:11:30.227665  515592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
I0923 12:11:30.228278  515592 main.go:141] libmachine: () Calling .GetVersion
I0923 12:11:30.229004  515592 main.go:141] libmachine: Using API Version  1
I0923 12:11:30.229052  515592 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:11:30.229437  515592 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:11:30.229652  515592 main.go:141] libmachine: (functional-544435) Calling .GetState
I0923 12:11:30.231698  515592 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0923 12:11:30.231752  515592 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:11:30.250619  515592 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35045
I0923 12:11:30.251173  515592 main.go:141] libmachine: () Calling .GetVersion
I0923 12:11:30.251740  515592 main.go:141] libmachine: Using API Version  1
I0923 12:11:30.251766  515592 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:11:30.252202  515592 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:11:30.252421  515592 main.go:141] libmachine: (functional-544435) Calling .DriverName
I0923 12:11:30.252687  515592 ssh_runner.go:195] Run: systemctl --version
I0923 12:11:30.252727  515592 main.go:141] libmachine: (functional-544435) Calling .GetSSHHostname
I0923 12:11:30.255652  515592 main.go:141] libmachine: (functional-544435) DBG | domain functional-544435 has defined MAC address 52:54:00:a9:44:2c in network mk-functional-544435
I0923 12:11:30.256146  515592 main.go:141] libmachine: (functional-544435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:44:2c", ip: ""} in network mk-functional-544435: {Iface:virbr1 ExpiryTime:2024-09-23 13:08:32 +0000 UTC Type:0 Mac:52:54:00:a9:44:2c Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:functional-544435 Clientid:01:52:54:00:a9:44:2c}
I0923 12:11:30.256237  515592 main.go:141] libmachine: (functional-544435) DBG | domain functional-544435 has defined IP address 192.168.39.197 and MAC address 52:54:00:a9:44:2c in network mk-functional-544435
I0923 12:11:30.256377  515592 main.go:141] libmachine: (functional-544435) Calling .GetSSHPort
I0923 12:11:30.256594  515592 main.go:141] libmachine: (functional-544435) Calling .GetSSHKeyPath
I0923 12:11:30.256752  515592 main.go:141] libmachine: (functional-544435) Calling .GetSSHUsername
I0923 12:11:30.256881  515592 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/functional-544435/id_rsa Username:docker}
I0923 12:11:30.363488  515592 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0923 12:11:30.414557  515592 main.go:141] libmachine: Making call to close driver server
I0923 12:11:30.414570  515592 main.go:141] libmachine: (functional-544435) Calling .Close
I0923 12:11:30.414900  515592 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:11:30.414929  515592 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:11:30.414937  515592 main.go:141] libmachine: Making call to close driver server
I0923 12:11:30.414945  515592 main.go:141] libmachine: (functional-544435) Calling .Close
I0923 12:11:30.415226  515592 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:11:30.415247  515592 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
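The JSON printed by `image ls --format json` above is a flat array of `{id, repoDigests, repoTags, size}` objects, which makes it easy to post-process. A minimal sketch using only the Python standard library; the sample data is abridged from the listing above (ids shortened), and the script itself is illustrative, not part of the test suite:

```python
import json

# Abridged from the `image ls --format json` output above; ids shortened for readability.
raw = """[
 {"id": "39286ab8", "repoDigests": [], "repoTags": ["docker.io/library/nginx:latest"], "size": "188000000"},
 {"id": "2e96e591", "repoDigests": [], "repoTags": ["registry.k8s.io/etcd:3.5.15-0"], "size": "148000000"},
 {"id": "873ed751", "repoDigests": [], "repoTags": ["registry.k8s.io/pause:3.10"], "size": "736000"}
]"""

images = json.loads(raw)

# "size" is a decimal string of bytes, so convert before sorting.
for img in sorted(images, key=lambda i: int(i["size"]), reverse=True):
    tags = ", ".join(img["repoTags"]) or "<untagged>"
    print(f"{int(img['size']) / 1e6:8.1f} MB  {tags}")
```

The same schema appears in the YAML listing below it, just rendered by a different formatter.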
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-544435 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-544435
size: "4940000"
- id: 2d0014f8da5e0e099e3ef2dd70465bb3a9512dc53558e52cbeb784495ad3b90f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-544435
size: "30"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-544435 image ls --format yaml --alsologtostderr:
I0923 12:11:26.975153  515183 out.go:345] Setting OutFile to fd 1 ...
I0923 12:11:26.975293  515183 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:11:26.975303  515183 out.go:358] Setting ErrFile to fd 2...
I0923 12:11:26.975307  515183 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:11:26.975494  515183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
I0923 12:11:26.976140  515183 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:11:26.976258  515183 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:11:26.976666  515183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0923 12:11:26.976720  515183 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:11:26.992831  515183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43393
I0923 12:11:26.993408  515183 main.go:141] libmachine: () Calling .GetVersion
I0923 12:11:26.994078  515183 main.go:141] libmachine: Using API Version  1
I0923 12:11:26.994109  515183 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:11:26.994501  515183 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:11:26.994711  515183 main.go:141] libmachine: (functional-544435) Calling .GetState
I0923 12:11:26.996735  515183 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0923 12:11:26.996802  515183 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:11:27.012437  515183 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33269
I0923 12:11:27.012975  515183 main.go:141] libmachine: () Calling .GetVersion
I0923 12:11:27.013518  515183 main.go:141] libmachine: Using API Version  1
I0923 12:11:27.013548  515183 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:11:27.013988  515183 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:11:27.014176  515183 main.go:141] libmachine: (functional-544435) Calling .DriverName
I0923 12:11:27.014413  515183 ssh_runner.go:195] Run: systemctl --version
I0923 12:11:27.014445  515183 main.go:141] libmachine: (functional-544435) Calling .GetSSHHostname
I0923 12:11:27.017276  515183 main.go:141] libmachine: (functional-544435) DBG | domain functional-544435 has defined MAC address 52:54:00:a9:44:2c in network mk-functional-544435
I0923 12:11:27.017722  515183 main.go:141] libmachine: (functional-544435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:44:2c", ip: ""} in network mk-functional-544435: {Iface:virbr1 ExpiryTime:2024-09-23 13:08:32 +0000 UTC Type:0 Mac:52:54:00:a9:44:2c Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:functional-544435 Clientid:01:52:54:00:a9:44:2c}
I0923 12:11:27.017751  515183 main.go:141] libmachine: (functional-544435) DBG | domain functional-544435 has defined IP address 192.168.39.197 and MAC address 52:54:00:a9:44:2c in network mk-functional-544435
I0923 12:11:27.017883  515183 main.go:141] libmachine: (functional-544435) Calling .GetSSHPort
I0923 12:11:27.018073  515183 main.go:141] libmachine: (functional-544435) Calling .GetSSHKeyPath
I0923 12:11:27.018344  515183 main.go:141] libmachine: (functional-544435) Calling .GetSSHUsername
I0923 12:11:27.018509  515183 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/functional-544435/id_rsa Username:docker}
I0923 12:11:27.106194  515183 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0923 12:11:27.140098  515183 main.go:141] libmachine: Making call to close driver server
I0923 12:11:27.140121  515183 main.go:141] libmachine: (functional-544435) Calling .Close
I0923 12:11:27.140463  515183 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:11:27.140494  515183 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:11:27.140505  515183 main.go:141] libmachine: Making call to close driver server
I0923 12:11:27.140513  515183 main.go:141] libmachine: (functional-544435) Calling .Close
I0923 12:11:27.140865  515183 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:11:27.140893  515183 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:11:27.140909  515183 main.go:141] libmachine: (functional-544435) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-544435 ssh pgrep buildkitd: exit status 1 (213.003251ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image build -t localhost/my-image:functional-544435 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-544435 image build -t localhost/my-image:functional-544435 testdata/build --alsologtostderr: (4.023833201s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-544435 image build -t localhost/my-image:functional-544435 testdata/build --alsologtostderr:
I0923 12:11:27.421731  515262 out.go:345] Setting OutFile to fd 1 ...
I0923 12:11:27.422033  515262 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:11:27.422045  515262 out.go:358] Setting ErrFile to fd 2...
I0923 12:11:27.422049  515262 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 12:11:27.422217  515262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
I0923 12:11:27.422865  515262 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:11:27.423648  515262 config.go:182] Loaded profile config "functional-544435": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 12:11:27.424262  515262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0923 12:11:27.424326  515262 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:11:27.441053  515262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33401
I0923 12:11:27.441508  515262 main.go:141] libmachine: () Calling .GetVersion
I0923 12:11:27.442198  515262 main.go:141] libmachine: Using API Version  1
I0923 12:11:27.442239  515262 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:11:27.442616  515262 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:11:27.442860  515262 main.go:141] libmachine: (functional-544435) Calling .GetState
I0923 12:11:27.444687  515262 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0923 12:11:27.444724  515262 main.go:141] libmachine: Launching plugin server for driver kvm2
I0923 12:11:27.461144  515262 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41531
I0923 12:11:27.461764  515262 main.go:141] libmachine: () Calling .GetVersion
I0923 12:11:27.462286  515262 main.go:141] libmachine: Using API Version  1
I0923 12:11:27.462308  515262 main.go:141] libmachine: () Calling .SetConfigRaw
I0923 12:11:27.462757  515262 main.go:141] libmachine: () Calling .GetMachineName
I0923 12:11:27.462937  515262 main.go:141] libmachine: (functional-544435) Calling .DriverName
I0923 12:11:27.463190  515262 ssh_runner.go:195] Run: systemctl --version
I0923 12:11:27.463219  515262 main.go:141] libmachine: (functional-544435) Calling .GetSSHHostname
I0923 12:11:27.466289  515262 main.go:141] libmachine: (functional-544435) DBG | domain functional-544435 has defined MAC address 52:54:00:a9:44:2c in network mk-functional-544435
I0923 12:11:27.466878  515262 main.go:141] libmachine: (functional-544435) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a9:44:2c", ip: ""} in network mk-functional-544435: {Iface:virbr1 ExpiryTime:2024-09-23 13:08:32 +0000 UTC Type:0 Mac:52:54:00:a9:44:2c Iaid: IPaddr:192.168.39.197 Prefix:24 Hostname:functional-544435 Clientid:01:52:54:00:a9:44:2c}
I0923 12:11:27.466914  515262 main.go:141] libmachine: (functional-544435) DBG | domain functional-544435 has defined IP address 192.168.39.197 and MAC address 52:54:00:a9:44:2c in network mk-functional-544435
I0923 12:11:27.467023  515262 main.go:141] libmachine: (functional-544435) Calling .GetSSHPort
I0923 12:11:27.467288  515262 main.go:141] libmachine: (functional-544435) Calling .GetSSHKeyPath
I0923 12:11:27.467420  515262 main.go:141] libmachine: (functional-544435) Calling .GetSSHUsername
I0923 12:11:27.467564  515262 sshutil.go:53] new ssh client: &{IP:192.168.39.197 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/functional-544435/id_rsa Username:docker}
I0923 12:11:27.573151  515262 build_images.go:161] Building image from path: /tmp/build.3191386578.tar
I0923 12:11:27.573218  515262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 12:11:27.588639  515262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3191386578.tar
I0923 12:11:27.593194  515262 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3191386578.tar: stat -c "%s %y" /var/lib/minikube/build/build.3191386578.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3191386578.tar': No such file or directory
I0923 12:11:27.593231  515262 ssh_runner.go:362] scp /tmp/build.3191386578.tar --> /var/lib/minikube/build/build.3191386578.tar (3072 bytes)
I0923 12:11:27.616866  515262 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3191386578
I0923 12:11:27.627759  515262 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3191386578 -xf /var/lib/minikube/build/build.3191386578.tar
I0923 12:11:27.637642  515262 docker.go:360] Building image: /var/lib/minikube/build/build.3191386578
I0923 12:11:27.637756  515262 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-544435 /var/lib/minikube/build/build.3191386578
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.8s
#6 [2/3] RUN true
#6 DONE 0.8s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:bc63715d3ec70cbcafa025c2ed5618c143faddde7782e2625908fd1a8f74c613 done
#8 naming to localhost/my-image:functional-544435
#8 naming to localhost/my-image:functional-544435 done
#8 DONE 0.1s
I0923 12:11:31.351241  515262 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-544435 /var/lib/minikube/build/build.3191386578: (3.713448328s)
I0923 12:11:31.351336  515262 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3191386578
I0923 12:11:31.365892  515262 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3191386578.tar
I0923 12:11:31.378929  515262 build_images.go:217] Built localhost/my-image:functional-544435 from /tmp/build.3191386578.tar
I0923 12:11:31.378964  515262 build_images.go:133] succeeded building to: functional-544435
I0923 12:11:31.378969  515262 build_images.go:134] failed building to: 
I0923 12:11:31.378997  515262 main.go:141] libmachine: Making call to close driver server
I0923 12:11:31.379030  515262 main.go:141] libmachine: (functional-544435) Calling .Close
I0923 12:11:31.379365  515262 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:11:31.379388  515262 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:11:31.379390  515262 main.go:141] libmachine: (functional-544435) DBG | Closing plugin on server side
I0923 12:11:31.379397  515262 main.go:141] libmachine: Making call to close driver server
I0923 12:11:31.379409  515262 main.go:141] libmachine: (functional-544435) Calling .Close
I0923 12:11:31.379699  515262 main.go:141] libmachine: Successfully made call to close driver server
I0923 12:11:31.379718  515262 main.go:141] libmachine: Making call to close connection to plugin binary
I0923 12:11:31.379722  515262 main.go:141] libmachine: (functional-544435) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image ls
2024/09/23 12:11:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)
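The BuildKit stages logged above (a 97-byte Dockerfile, `FROM gcr.io/k8s-minikube/busybox:latest`, `RUN true`, `ADD content.txt /`) imply a minimal Dockerfile along these lines. This is a reconstruction from the log, not necessarily the verbatim contents of `testdata/build`:

```dockerfile
# Reconstructed from build stages #1-#7 in the log above (hypothetical).
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```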
TestFunctional/parallel/ImageCommands/Setup (1.86s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.830804314s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-544435
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.86s)
TestFunctional/parallel/DockerEnv/bash (0.82s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-544435 docker-env) && out/minikube-linux-amd64 status -p functional-544435"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-544435 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.82s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
TestFunctional/parallel/ServiceCmd/DeployApp (29.47s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-544435 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-544435 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-srzll" [4a641d4a-e786-4149-9813-0937d22473a1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-srzll" [4a641d4a-e786-4149-9813-0937d22473a1] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 29.281314677s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (29.47s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image load --daemon kicbase/echo-server:functional-544435 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.07s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image load --daemon kicbase/echo-server:functional-544435 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.74s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-544435
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image load --daemon kicbase/echo-server:functional-544435 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.59s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image save kicbase/echo-server:functional-544435 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image rm kicbase/echo-server:functional-544435 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-544435
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 image save --daemon kicbase/echo-server:functional-544435 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-544435
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "291.969035ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "49.755137ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "324.208712ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.91257ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 service list -o json
functional_test.go:1494: Took "443.468112ms" to run "out/minikube-linux-amd64 -p functional-544435 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/MountCmd/any-port (8.65s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdany-port1331301598/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727093478658246963" to /tmp/TestFunctionalparallelMountCmdany-port1331301598/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727093478658246963" to /tmp/TestFunctionalparallelMountCmdany-port1331301598/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727093478658246963" to /tmp/TestFunctionalparallelMountCmdany-port1331301598/001/test-1727093478658246963
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (205.958539ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 12:11:18.864603  505012 retry.go:31] will retry after 528.660187ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 12:11 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 12:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 12:11 test-1727093478658246963
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh cat /mount-9p/test-1727093478658246963
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-544435 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [347c0c06-06f1-4159-b96b-2a135ae029fb] Pending
helpers_test.go:344: "busybox-mount" [347c0c06-06f1-4159-b96b-2a135ae029fb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [347c0c06-06f1-4159-b96b-2a135ae029fb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [347c0c06-06f1-4159-b96b-2a135ae029fb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004707755s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-544435 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdany-port1331301598/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.65s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.197:30683
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

TestFunctional/parallel/ServiceCmd/Format (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.197:30683
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)

TestFunctional/parallel/MountCmd/specific-port (1.59s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdspecific-port4256485320/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.856491ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 12:11:27.540800  505012 retry.go:31] will retry after 318.15277ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdspecific-port4256485320/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-544435 ssh "sudo umount -f /mount-9p": exit status 1 (217.905023ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-544435 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdspecific-port4256485320/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2459811314/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2459811314/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2459811314/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T" /mount1: exit status 1 (309.232127ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 12:11:29.205626  505012 retry.go:31] will retry after 297.844024ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-544435 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-544435 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2459811314/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2459811314/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-544435 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2459811314/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-544435
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-544435
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-544435
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (266.85s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-529491 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-529491 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m18.457946028s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-529491 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-529491 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.621004527s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-529491 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-529491 addons enable gvisor: (5.763026478s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [d33f613f-da6e-4b0b-ae78-a3e5d54eb651] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.004136419s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-529491 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [5398a7f0-a4d9-41c8-a8fa-85f755c0c7d0] Pending
helpers_test.go:344: "nginx-gvisor" [5398a7f0-a4d9-41c8-a8fa-85f755c0c7d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [5398a7f0-a4d9-41c8-a8fa-85f755c0c7d0] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 17.004848548s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-529491
E0923 12:56:49.574014  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-529491: (1m32.504801984s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-529491 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-529491 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (32.222664203s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [d33f613f-da6e-4b0b-ae78-a3e5d54eb651] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.00389833s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [5398a7f0-a4d9-41c8-a8fa-85f755c0c7d0] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.005019642s
helpers_test.go:175: Cleaning up "gvisor-529491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-529491
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-529491: (1.072093718s)
--- PASS: TestGvisorAddon (266.85s)

TestMultiControlPlane/serial/StartCluster (325.05s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-501982 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0923 12:11:49.574986  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:49.581457  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:49.592898  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:49.614395  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:49.655854  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:49.737499  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:49.899135  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:50.220898  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:50.863061  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:52.144491  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:54.707021  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:11:59.828847  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:12:10.070442  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:12:30.552045  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:13:11.514744  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:14:33.439782  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:48.522925  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:48.530391  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:48.541802  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:48.563721  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:48.605115  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:48.686609  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:48.848466  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:49.169840  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:49.812045  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:51.093870  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:53.655371  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:15:58.777096  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:16:09.018912  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:16:29.500994  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:16:49.573997  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-501982 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (5m24.366581012s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (325.05s)

TestMultiControlPlane/serial/DeployApp (7.04s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- rollout status deployment/busybox
E0923 12:17:10.462704  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-501982 -- rollout status deployment/busybox: (4.826212839s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-7k2dx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-pd8qc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-zkppt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-7k2dx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-pd8qc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-zkppt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-7k2dx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-pd8qc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-zkppt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.04s)

TestMultiControlPlane/serial/PingHostFromPods (1.24s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-7k2dx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-7k2dx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-pd8qc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-pd8qc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-zkppt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-501982 -- exec busybox-7dff88458-zkppt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.24s)
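The `nslookup | awk | cut` pipeline in this test extracts the host IP that the subsequent `ping -c 1` targets. A stand-alone sketch of that extraction, using a hypothetical busybox-style `nslookup` transcript (the real output layout varies by busybox version; the sample text below is an assumption, not captured from this run):

```shell
# Hypothetical busybox-style nslookup output; the test's pipeline assumes
# the answer for the queried name lands on line 5 as "Address N: <ip> <name>".
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# awk 'NR==5' keeps only the fifth line; cut takes the third
# space-separated field, which is the IP address itself.
ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"   # -> 192.168.39.1
```

The fixed `NR==5` index is why this parsing is fragile across busybox versions: if the resolver prints an extra line, field 3 of line 5 is no longer the IP.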

TestMultiControlPlane/serial/AddWorkerNode (65.95s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-501982 -v=7 --alsologtostderr
E0923 12:17:17.282412  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-501982 -v=7 --alsologtostderr: (1m5.076133006s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (65.95s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-501982 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (13.01s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp testdata/cp-test.txt ha-501982:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1588095936/001/cp-test_ha-501982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982:/home/docker/cp-test.txt ha-501982-m02:/home/docker/cp-test_ha-501982_ha-501982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m02 "sudo cat /home/docker/cp-test_ha-501982_ha-501982-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982:/home/docker/cp-test.txt ha-501982-m03:/home/docker/cp-test_ha-501982_ha-501982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m03 "sudo cat /home/docker/cp-test_ha-501982_ha-501982-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982:/home/docker/cp-test.txt ha-501982-m04:/home/docker/cp-test_ha-501982_ha-501982-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m04 "sudo cat /home/docker/cp-test_ha-501982_ha-501982-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp testdata/cp-test.txt ha-501982-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1588095936/001/cp-test_ha-501982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m02:/home/docker/cp-test.txt ha-501982:/home/docker/cp-test_ha-501982-m02_ha-501982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982 "sudo cat /home/docker/cp-test_ha-501982-m02_ha-501982.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m02:/home/docker/cp-test.txt ha-501982-m03:/home/docker/cp-test_ha-501982-m02_ha-501982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m03 "sudo cat /home/docker/cp-test_ha-501982-m02_ha-501982-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m02:/home/docker/cp-test.txt ha-501982-m04:/home/docker/cp-test_ha-501982-m02_ha-501982-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m04 "sudo cat /home/docker/cp-test_ha-501982-m02_ha-501982-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp testdata/cp-test.txt ha-501982-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1588095936/001/cp-test_ha-501982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m03:/home/docker/cp-test.txt ha-501982:/home/docker/cp-test_ha-501982-m03_ha-501982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982 "sudo cat /home/docker/cp-test_ha-501982-m03_ha-501982.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m03:/home/docker/cp-test.txt ha-501982-m02:/home/docker/cp-test_ha-501982-m03_ha-501982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m02 "sudo cat /home/docker/cp-test_ha-501982-m03_ha-501982-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m03:/home/docker/cp-test.txt ha-501982-m04:/home/docker/cp-test_ha-501982-m03_ha-501982-m04.txt
E0923 12:18:32.384863  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m04 "sudo cat /home/docker/cp-test_ha-501982-m03_ha-501982-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp testdata/cp-test.txt ha-501982-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1588095936/001/cp-test_ha-501982-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m04:/home/docker/cp-test.txt ha-501982:/home/docker/cp-test_ha-501982-m04_ha-501982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982 "sudo cat /home/docker/cp-test_ha-501982-m04_ha-501982.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m04:/home/docker/cp-test.txt ha-501982-m02:/home/docker/cp-test_ha-501982-m04_ha-501982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m02 "sudo cat /home/docker/cp-test_ha-501982-m04_ha-501982-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 cp ha-501982-m04:/home/docker/cp-test.txt ha-501982-m03:/home/docker/cp-test_ha-501982-m04_ha-501982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 ssh -n ha-501982-m03 "sudo cat /home/docker/cp-test_ha-501982-m04_ha-501982-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.01s)
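The CopyFile sequence above repeats one pattern for every node pair: `minikube cp` the file in one of three directions (host to node, node to host, node to node), then `minikube ssh ... sudo cat` it back to confirm the round trip. A local stand-in for that pattern, with plain `cp` and `cat` substituted for the minikube subcommands and temporary paths standing in for `testdata/cp-test.txt` and the nodes' `/home/docker`:

```shell
# $src stands in for testdata/cp-test.txt; $node for a node's /home/docker.
src=$(mktemp)
node=$(mktemp -d)
printf 'cp-test payload\n' > "$src"

cp "$src" "$node/cp-test.txt"            # the "minikube cp" step
roundtrip=$(cat "$node/cp-test.txt")     # the "ssh ... sudo cat" verification
echo "$roundtrip"
```

Comparing the read-back contents against the original is the whole assertion; the test fails if any hop in the copy chain mangled or dropped the file.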

TestMultiControlPlane/serial/StopSecondaryNode (13.98s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-501982 node stop m02 -v=7 --alsologtostderr: (13.309503755s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-501982 status -v=7 --alsologtostderr: exit status 7 (667.279601ms)

-- stdout --
	ha-501982
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-501982-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501982-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-501982-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0923 12:18:49.256254  520883 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:18:49.256388  520883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:18:49.256400  520883 out.go:358] Setting ErrFile to fd 2...
	I0923 12:18:49.256407  520883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:18:49.256677  520883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	I0923 12:18:49.256942  520883 out.go:352] Setting JSON to false
	I0923 12:18:49.256990  520883 mustload.go:65] Loading cluster: ha-501982
	I0923 12:18:49.257041  520883 notify.go:220] Checking for updates...
	I0923 12:18:49.257595  520883 config.go:182] Loaded profile config "ha-501982": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:18:49.257620  520883 status.go:174] checking status of ha-501982 ...
	I0923 12:18:49.258146  520883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:18:49.258219  520883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:18:49.273708  520883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45989
	I0923 12:18:49.274253  520883 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:18:49.274932  520883 main.go:141] libmachine: Using API Version  1
	I0923 12:18:49.274967  520883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:18:49.275424  520883 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:18:49.275649  520883 main.go:141] libmachine: (ha-501982) Calling .GetState
	I0923 12:18:49.277388  520883 status.go:364] ha-501982 host status = "Running" (err=<nil>)
	I0923 12:18:49.277405  520883 host.go:66] Checking if "ha-501982" exists ...
	I0923 12:18:49.277812  520883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:18:49.277882  520883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:18:49.293939  520883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34323
	I0923 12:18:49.294458  520883 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:18:49.295096  520883 main.go:141] libmachine: Using API Version  1
	I0923 12:18:49.295122  520883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:18:49.295442  520883 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:18:49.295626  520883 main.go:141] libmachine: (ha-501982) Calling .GetIP
	I0923 12:18:49.298672  520883 main.go:141] libmachine: (ha-501982) DBG | domain ha-501982 has defined MAC address 52:54:00:95:80:f1 in network mk-ha-501982
	I0923 12:18:49.299164  520883 main.go:141] libmachine: (ha-501982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:80:f1", ip: ""} in network mk-ha-501982: {Iface:virbr1 ExpiryTime:2024-09-23 13:11:56 +0000 UTC Type:0 Mac:52:54:00:95:80:f1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-501982 Clientid:01:52:54:00:95:80:f1}
	I0923 12:18:49.299194  520883 main.go:141] libmachine: (ha-501982) DBG | domain ha-501982 has defined IP address 192.168.39.94 and MAC address 52:54:00:95:80:f1 in network mk-ha-501982
	I0923 12:18:49.299355  520883 host.go:66] Checking if "ha-501982" exists ...
	I0923 12:18:49.299660  520883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:18:49.299699  520883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:18:49.315739  520883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37385
	I0923 12:18:49.316137  520883 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:18:49.316712  520883 main.go:141] libmachine: Using API Version  1
	I0923 12:18:49.316739  520883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:18:49.317069  520883 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:18:49.317269  520883 main.go:141] libmachine: (ha-501982) Calling .DriverName
	I0923 12:18:49.317518  520883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:18:49.317563  520883 main.go:141] libmachine: (ha-501982) Calling .GetSSHHostname
	I0923 12:18:49.320878  520883 main.go:141] libmachine: (ha-501982) DBG | domain ha-501982 has defined MAC address 52:54:00:95:80:f1 in network mk-ha-501982
	I0923 12:18:49.321471  520883 main.go:141] libmachine: (ha-501982) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:80:f1", ip: ""} in network mk-ha-501982: {Iface:virbr1 ExpiryTime:2024-09-23 13:11:56 +0000 UTC Type:0 Mac:52:54:00:95:80:f1 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:ha-501982 Clientid:01:52:54:00:95:80:f1}
	I0923 12:18:49.321509  520883 main.go:141] libmachine: (ha-501982) DBG | domain ha-501982 has defined IP address 192.168.39.94 and MAC address 52:54:00:95:80:f1 in network mk-ha-501982
	I0923 12:18:49.321668  520883 main.go:141] libmachine: (ha-501982) Calling .GetSSHPort
	I0923 12:18:49.321832  520883 main.go:141] libmachine: (ha-501982) Calling .GetSSHKeyPath
	I0923 12:18:49.321956  520883 main.go:141] libmachine: (ha-501982) Calling .GetSSHUsername
	I0923 12:18:49.322067  520883 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/ha-501982/id_rsa Username:docker}
	I0923 12:18:49.418817  520883 ssh_runner.go:195] Run: systemctl --version
	I0923 12:18:49.424669  520883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:18:49.441266  520883 kubeconfig.go:125] found "ha-501982" server: "https://192.168.39.254:8443"
	I0923 12:18:49.441306  520883 api_server.go:166] Checking apiserver status ...
	I0923 12:18:49.441342  520883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:18:49.460074  520883 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1948/cgroup
	W0923 12:18:49.470520  520883 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1948/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 12:18:49.470577  520883 ssh_runner.go:195] Run: ls
	I0923 12:18:49.474682  520883 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0923 12:18:49.479199  520883 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0923 12:18:49.479222  520883 status.go:456] ha-501982 apiserver status = Running (err=<nil>)
	I0923 12:18:49.479232  520883 status.go:176] ha-501982 status: &{Name:ha-501982 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:18:49.479248  520883 status.go:174] checking status of ha-501982-m02 ...
	I0923 12:18:49.479534  520883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:18:49.479567  520883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:18:49.496228  520883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I0923 12:18:49.496669  520883 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:18:49.497208  520883 main.go:141] libmachine: Using API Version  1
	I0923 12:18:49.497234  520883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:18:49.497560  520883 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:18:49.497808  520883 main.go:141] libmachine: (ha-501982-m02) Calling .GetState
	I0923 12:18:49.499698  520883 status.go:364] ha-501982-m02 host status = "Stopped" (err=<nil>)
	I0923 12:18:49.499710  520883 status.go:377] host is not running, skipping remaining checks
	I0923 12:18:49.499716  520883 status.go:176] ha-501982-m02 status: &{Name:ha-501982-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:18:49.499736  520883 status.go:174] checking status of ha-501982-m03 ...
	I0923 12:18:49.500027  520883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:18:49.500067  520883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:18:49.515905  520883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36607
	I0923 12:18:49.516408  520883 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:18:49.516886  520883 main.go:141] libmachine: Using API Version  1
	I0923 12:18:49.516915  520883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:18:49.517265  520883 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:18:49.517476  520883 main.go:141] libmachine: (ha-501982-m03) Calling .GetState
	I0923 12:18:49.519209  520883 status.go:364] ha-501982-m03 host status = "Running" (err=<nil>)
	I0923 12:18:49.519224  520883 host.go:66] Checking if "ha-501982-m03" exists ...
	I0923 12:18:49.519530  520883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:18:49.519566  520883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:18:49.537358  520883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33523
	I0923 12:18:49.537799  520883 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:18:49.538350  520883 main.go:141] libmachine: Using API Version  1
	I0923 12:18:49.538377  520883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:18:49.538770  520883 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:18:49.539038  520883 main.go:141] libmachine: (ha-501982-m03) Calling .GetIP
	I0923 12:18:49.542097  520883 main.go:141] libmachine: (ha-501982-m03) DBG | domain ha-501982-m03 has defined MAC address 52:54:00:26:df:c4 in network mk-ha-501982
	I0923 12:18:49.542439  520883 main.go:141] libmachine: (ha-501982-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:df:c4", ip: ""} in network mk-ha-501982: {Iface:virbr1 ExpiryTime:2024-09-23 13:16:02 +0000 UTC Type:0 Mac:52:54:00:26:df:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-501982-m03 Clientid:01:52:54:00:26:df:c4}
	I0923 12:18:49.542466  520883 main.go:141] libmachine: (ha-501982-m03) DBG | domain ha-501982-m03 has defined IP address 192.168.39.183 and MAC address 52:54:00:26:df:c4 in network mk-ha-501982
	I0923 12:18:49.542690  520883 host.go:66] Checking if "ha-501982-m03" exists ...
	I0923 12:18:49.543035  520883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:18:49.543091  520883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:18:49.558272  520883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40533
	I0923 12:18:49.558842  520883 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:18:49.559454  520883 main.go:141] libmachine: Using API Version  1
	I0923 12:18:49.559477  520883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:18:49.559800  520883 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:18:49.560011  520883 main.go:141] libmachine: (ha-501982-m03) Calling .DriverName
	I0923 12:18:49.560193  520883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:18:49.560219  520883 main.go:141] libmachine: (ha-501982-m03) Calling .GetSSHHostname
	I0923 12:18:49.563095  520883 main.go:141] libmachine: (ha-501982-m03) DBG | domain ha-501982-m03 has defined MAC address 52:54:00:26:df:c4 in network mk-ha-501982
	I0923 12:18:49.563669  520883 main.go:141] libmachine: (ha-501982-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:df:c4", ip: ""} in network mk-ha-501982: {Iface:virbr1 ExpiryTime:2024-09-23 13:16:02 +0000 UTC Type:0 Mac:52:54:00:26:df:c4 Iaid: IPaddr:192.168.39.183 Prefix:24 Hostname:ha-501982-m03 Clientid:01:52:54:00:26:df:c4}
	I0923 12:18:49.563690  520883 main.go:141] libmachine: (ha-501982-m03) DBG | domain ha-501982-m03 has defined IP address 192.168.39.183 and MAC address 52:54:00:26:df:c4 in network mk-ha-501982
	I0923 12:18:49.563865  520883 main.go:141] libmachine: (ha-501982-m03) Calling .GetSSHPort
	I0923 12:18:49.564033  520883 main.go:141] libmachine: (ha-501982-m03) Calling .GetSSHKeyPath
	I0923 12:18:49.564231  520883 main.go:141] libmachine: (ha-501982-m03) Calling .GetSSHUsername
	I0923 12:18:49.564380  520883 sshutil.go:53] new ssh client: &{IP:192.168.39.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/ha-501982-m03/id_rsa Username:docker}
	I0923 12:18:49.647298  520883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:18:49.663394  520883 kubeconfig.go:125] found "ha-501982" server: "https://192.168.39.254:8443"
	I0923 12:18:49.663424  520883 api_server.go:166] Checking apiserver status ...
	I0923 12:18:49.663456  520883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:18:49.682031  520883 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1718/cgroup
	W0923 12:18:49.695606  520883 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1718/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 12:18:49.695672  520883 ssh_runner.go:195] Run: ls
	I0923 12:18:49.699970  520883 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0923 12:18:49.704178  520883 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0923 12:18:49.704205  520883 status.go:456] ha-501982-m03 apiserver status = Running (err=<nil>)
	I0923 12:18:49.704215  520883 status.go:176] ha-501982-m03 status: &{Name:ha-501982-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:18:49.704236  520883 status.go:174] checking status of ha-501982-m04 ...
	I0923 12:18:49.704620  520883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:18:49.704662  520883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:18:49.720410  520883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0923 12:18:49.720837  520883 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:18:49.721308  520883 main.go:141] libmachine: Using API Version  1
	I0923 12:18:49.721330  520883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:18:49.721633  520883 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:18:49.721840  520883 main.go:141] libmachine: (ha-501982-m04) Calling .GetState
	I0923 12:18:49.723500  520883 status.go:364] ha-501982-m04 host status = "Running" (err=<nil>)
	I0923 12:18:49.723516  520883 host.go:66] Checking if "ha-501982-m04" exists ...
	I0923 12:18:49.723787  520883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:18:49.723824  520883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:18:49.739829  520883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I0923 12:18:49.740310  520883 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:18:49.740800  520883 main.go:141] libmachine: Using API Version  1
	I0923 12:18:49.740822  520883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:18:49.741298  520883 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:18:49.741539  520883 main.go:141] libmachine: (ha-501982-m04) Calling .GetIP
	I0923 12:18:49.745312  520883 main.go:141] libmachine: (ha-501982-m04) DBG | domain ha-501982-m04 has defined MAC address 52:54:00:29:9b:19 in network mk-ha-501982
	I0923 12:18:49.745805  520883 main.go:141] libmachine: (ha-501982-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:9b:19", ip: ""} in network mk-ha-501982: {Iface:virbr1 ExpiryTime:2024-09-23 13:17:30 +0000 UTC Type:0 Mac:52:54:00:29:9b:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-501982-m04 Clientid:01:52:54:00:29:9b:19}
	I0923 12:18:49.745824  520883 main.go:141] libmachine: (ha-501982-m04) DBG | domain ha-501982-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:29:9b:19 in network mk-ha-501982
	I0923 12:18:49.746019  520883 host.go:66] Checking if "ha-501982-m04" exists ...
	I0923 12:18:49.746393  520883 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:18:49.746444  520883 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:18:49.763968  520883 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42453
	I0923 12:18:49.764660  520883 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:18:49.765246  520883 main.go:141] libmachine: Using API Version  1
	I0923 12:18:49.765279  520883 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:18:49.765701  520883 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:18:49.765934  520883 main.go:141] libmachine: (ha-501982-m04) Calling .DriverName
	I0923 12:18:49.766194  520883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:18:49.766223  520883 main.go:141] libmachine: (ha-501982-m04) Calling .GetSSHHostname
	I0923 12:18:49.769506  520883 main.go:141] libmachine: (ha-501982-m04) DBG | domain ha-501982-m04 has defined MAC address 52:54:00:29:9b:19 in network mk-ha-501982
	I0923 12:18:49.769970  520883 main.go:141] libmachine: (ha-501982-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:9b:19", ip: ""} in network mk-ha-501982: {Iface:virbr1 ExpiryTime:2024-09-23 13:17:30 +0000 UTC Type:0 Mac:52:54:00:29:9b:19 Iaid: IPaddr:192.168.39.247 Prefix:24 Hostname:ha-501982-m04 Clientid:01:52:54:00:29:9b:19}
	I0923 12:18:49.770002  520883 main.go:141] libmachine: (ha-501982-m04) DBG | domain ha-501982-m04 has defined IP address 192.168.39.247 and MAC address 52:54:00:29:9b:19 in network mk-ha-501982
	I0923 12:18:49.770130  520883 main.go:141] libmachine: (ha-501982-m04) Calling .GetSSHPort
	I0923 12:18:49.770311  520883 main.go:141] libmachine: (ha-501982-m04) Calling .GetSSHKeyPath
	I0923 12:18:49.770475  520883 main.go:141] libmachine: (ha-501982-m04) Calling .GetSSHUsername
	I0923 12:18:49.770633  520883 sshutil.go:53] new ssh client: &{IP:192.168.39.247 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/ha-501982-m04/id_rsa Username:docker}
	I0923 12:18:49.863284  520883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:18:49.877510  520883 status.go:176] ha-501982-m04 status: &{Name:ha-501982-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.98s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (44.45s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-501982 node start m02 -v=7 --alsologtostderr: (43.526752897s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (260.38s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-501982 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-501982 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-501982 -v=7 --alsologtostderr: (40.735709014s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-501982 --wait=true -v=7 --alsologtostderr
E0923 12:20:48.523090  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:21:16.226664  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:21:49.574044  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-501982 --wait=true -v=7 --alsologtostderr: (3m39.552048274s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-501982
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (260.38s)

TestMultiControlPlane/serial/DeleteSecondaryNode (7.08s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-501982 node delete m03 -v=7 --alsologtostderr: (6.325420111s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.08s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (38.35s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-501982 stop -v=7 --alsologtostderr: (38.246711177s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-501982 status -v=7 --alsologtostderr: exit status 7 (101.803899ms)

-- stdout --
	ha-501982
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501982-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-501982-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 12:24:42.237793  523441 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:24:42.238046  523441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:24:42.238054  523441 out.go:358] Setting ErrFile to fd 2...
	I0923 12:24:42.238058  523441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:24:42.238240  523441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	I0923 12:24:42.238440  523441 out.go:352] Setting JSON to false
	I0923 12:24:42.238474  523441 mustload.go:65] Loading cluster: ha-501982
	I0923 12:24:42.238521  523441 notify.go:220] Checking for updates...
	I0923 12:24:42.238916  523441 config.go:182] Loaded profile config "ha-501982": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:24:42.238938  523441 status.go:174] checking status of ha-501982 ...
	I0923 12:24:42.239389  523441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:24:42.239448  523441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:24:42.254266  523441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38333
	I0923 12:24:42.254791  523441 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:24:42.255370  523441 main.go:141] libmachine: Using API Version  1
	I0923 12:24:42.255388  523441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:24:42.255785  523441 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:24:42.255977  523441 main.go:141] libmachine: (ha-501982) Calling .GetState
	I0923 12:24:42.257583  523441 status.go:364] ha-501982 host status = "Stopped" (err=<nil>)
	I0923 12:24:42.257596  523441 status.go:377] host is not running, skipping remaining checks
	I0923 12:24:42.257601  523441 status.go:176] ha-501982 status: &{Name:ha-501982 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:24:42.257640  523441 status.go:174] checking status of ha-501982-m02 ...
	I0923 12:24:42.257921  523441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:24:42.257954  523441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:24:42.272606  523441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36093
	I0923 12:24:42.273035  523441 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:24:42.273440  523441 main.go:141] libmachine: Using API Version  1
	I0923 12:24:42.273460  523441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:24:42.273780  523441 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:24:42.273930  523441 main.go:141] libmachine: (ha-501982-m02) Calling .GetState
	I0923 12:24:42.275417  523441 status.go:364] ha-501982-m02 host status = "Stopped" (err=<nil>)
	I0923 12:24:42.275433  523441 status.go:377] host is not running, skipping remaining checks
	I0923 12:24:42.275441  523441 status.go:176] ha-501982-m02 status: &{Name:ha-501982-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:24:42.275468  523441 status.go:174] checking status of ha-501982-m04 ...
	I0923 12:24:42.275914  523441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:24:42.275962  523441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:24:42.290929  523441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33245
	I0923 12:24:42.291504  523441 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:24:42.292034  523441 main.go:141] libmachine: Using API Version  1
	I0923 12:24:42.292072  523441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:24:42.292446  523441 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:24:42.292621  523441 main.go:141] libmachine: (ha-501982-m04) Calling .GetState
	I0923 12:24:42.294179  523441 status.go:364] ha-501982-m04 host status = "Stopped" (err=<nil>)
	I0923 12:24:42.294193  523441 status.go:377] host is not running, skipping remaining checks
	I0923 12:24:42.294199  523441 status.go:176] ha-501982-m04 status: &{Name:ha-501982-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.35s)

TestMultiControlPlane/serial/RestartCluster (161.77s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-501982 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0923 12:25:48.522886  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:26:49.574076  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-501982 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m41.018671061s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (161.77s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (80.02s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-501982 --control-plane -v=7 --alsologtostderr
E0923 12:28:12.646132  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-501982 --control-plane -v=7 --alsologtostderr: (1m19.159874019s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-501982 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.02s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestImageBuild/serial/Setup (44.72s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-324671 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-324671 --driver=kvm2 : (44.716395086s)
--- PASS: TestImageBuild/serial/Setup (44.72s)

TestImageBuild/serial/NormalBuild (2.77s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-324671
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-324671: (2.767625174s)
--- PASS: TestImageBuild/serial/NormalBuild (2.77s)

TestImageBuild/serial/BuildWithBuildArg (1.43s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-324671
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-324671: (1.427602945s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.43s)

TestImageBuild/serial/BuildWithDockerIgnore (0.98s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-324671
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.98s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.87s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-324671
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.87s)

TestJSONOutput/start/Command (91.57s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-249719 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E0923 12:30:48.522954  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-249719 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m31.569666954s)
--- PASS: TestJSONOutput/start/Command (91.57s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.55s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-249719 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.55s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.51s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-249719 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.43s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-249719 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-249719 --output=json --user=testUser: (7.433964097s)
--- PASS: TestJSONOutput/stop/Command (7.43s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-907875 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-907875 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.593515ms)

-- stdout --
	{"specversion":"1.0","id":"64bdf892-3e65-4f9d-836e-d0da7f947659","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-907875] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"64934713-333d-41fa-819b-e44517191dda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"fd964d53-01af-492f-9d34-b618d716e670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9eb964a1-e554-4164-8086-8fc097c1778c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig"}}
	{"specversion":"1.0","id":"36438e6c-1f49-48ac-93f6-adfb1aae36d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube"}}
	{"specversion":"1.0","id":"f3f967b5-6bb9-471b-bb9e-03de883e89e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a241e627-03fc-4f86-a35f-d16869404803","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9d8f576e-5c94-4693-bae9-8c74c46879e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-907875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-907875
--- PASS: TestErrorJSONOutput (0.20s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (97.14s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-080362 --driver=kvm2 
E0923 12:31:49.574995  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-080362 --driver=kvm2 : (46.774474162s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-092684 --driver=kvm2 
E0923 12:32:11.589276  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-092684 --driver=kvm2 : (47.517332929s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-080362
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-092684
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-092684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-092684
helpers_test.go:175: Cleaning up "first-080362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-080362
--- PASS: TestMinikubeProfile (97.14s)

TestMountStart/serial/StartWithMountFirst (28.14s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-329174 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-329174 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.135416808s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.14s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-329174 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-329174 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (28.07s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-351902 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-351902 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.071609621s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.07s)

TestMountStart/serial/VerifyMountSecond (0.38s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-351902 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-351902 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-329174 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-351902 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-351902 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (2.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-351902
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-351902: (2.284571927s)
--- PASS: TestMountStart/serial/Stop (2.28s)

TestMountStart/serial/RestartStopped (26.89s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-351902
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-351902: (25.88660644s)
--- PASS: TestMountStart/serial/RestartStopped (26.89s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-351902 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-351902 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (127.27s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-915704 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0923 12:35:48.522297  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-915704 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m6.851904268s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (127.27s)

TestMultiNode/serial/DeployApp2Nodes (4.89s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-915704 -- rollout status deployment/busybox: (3.401254341s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- exec busybox-7dff88458-h6n4v -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- exec busybox-7dff88458-xqswx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- exec busybox-7dff88458-h6n4v -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- exec busybox-7dff88458-xqswx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- exec busybox-7dff88458-h6n4v -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- exec busybox-7dff88458-xqswx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.89s)

TestMultiNode/serial/PingHostFrom2Pods (0.82s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- exec busybox-7dff88458-h6n4v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- exec busybox-7dff88458-h6n4v -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- exec busybox-7dff88458-xqswx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-915704 -- exec busybox-7dff88458-xqswx -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)

TestMultiNode/serial/AddNode (57.71s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-915704 -v 3 --alsologtostderr
E0923 12:36:49.574332  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-915704 -v 3 --alsologtostderr: (57.125098194s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.71s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-915704 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.6s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.60s)

TestMultiNode/serial/CopyFile (7.17s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp testdata/cp-test.txt multinode-915704:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp multinode-915704:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile750648462/001/cp-test_multinode-915704.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp multinode-915704:/home/docker/cp-test.txt multinode-915704-m02:/home/docker/cp-test_multinode-915704_multinode-915704-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m02 "sudo cat /home/docker/cp-test_multinode-915704_multinode-915704-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp multinode-915704:/home/docker/cp-test.txt multinode-915704-m03:/home/docker/cp-test_multinode-915704_multinode-915704-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m03 "sudo cat /home/docker/cp-test_multinode-915704_multinode-915704-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp testdata/cp-test.txt multinode-915704-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp multinode-915704-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile750648462/001/cp-test_multinode-915704-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp multinode-915704-m02:/home/docker/cp-test.txt multinode-915704:/home/docker/cp-test_multinode-915704-m02_multinode-915704.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704 "sudo cat /home/docker/cp-test_multinode-915704-m02_multinode-915704.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp multinode-915704-m02:/home/docker/cp-test.txt multinode-915704-m03:/home/docker/cp-test_multinode-915704-m02_multinode-915704-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m03 "sudo cat /home/docker/cp-test_multinode-915704-m02_multinode-915704-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp testdata/cp-test.txt multinode-915704-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp multinode-915704-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile750648462/001/cp-test_multinode-915704-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp multinode-915704-m03:/home/docker/cp-test.txt multinode-915704:/home/docker/cp-test_multinode-915704-m03_multinode-915704.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704 "sudo cat /home/docker/cp-test_multinode-915704-m03_multinode-915704.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 cp multinode-915704-m03:/home/docker/cp-test.txt multinode-915704-m02:/home/docker/cp-test_multinode-915704-m03_multinode-915704-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 ssh -n multinode-915704-m02 "sudo cat /home/docker/cp-test_multinode-915704-m03_multinode-915704-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.17s)

TestMultiNode/serial/StopNode (3.35s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-915704 node stop m03: (2.462422116s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-915704 status: exit status 7 (433.510554ms)

-- stdout --
	multinode-915704
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-915704-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-915704-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-915704 status --alsologtostderr: exit status 7 (451.533365ms)

-- stdout --
	multinode-915704
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-915704-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-915704-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 12:37:48.794961  531993 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:37:48.795072  531993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:37:48.795081  531993 out.go:358] Setting ErrFile to fd 2...
	I0923 12:37:48.795085  531993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:37:48.795245  531993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	I0923 12:37:48.795446  531993 out.go:352] Setting JSON to false
	I0923 12:37:48.795486  531993 mustload.go:65] Loading cluster: multinode-915704
	I0923 12:37:48.795549  531993 notify.go:220] Checking for updates...
	I0923 12:37:48.795853  531993 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:37:48.795873  531993 status.go:174] checking status of multinode-915704 ...
	I0923 12:37:48.796348  531993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:37:48.796407  531993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:37:48.815053  531993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0923 12:37:48.815615  531993 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:37:48.816285  531993 main.go:141] libmachine: Using API Version  1
	I0923 12:37:48.816311  531993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:37:48.816707  531993 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:37:48.816910  531993 main.go:141] libmachine: (multinode-915704) Calling .GetState
	I0923 12:37:48.818609  531993 status.go:364] multinode-915704 host status = "Running" (err=<nil>)
	I0923 12:37:48.818625  531993 host.go:66] Checking if "multinode-915704" exists ...
	I0923 12:37:48.818993  531993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:37:48.819069  531993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:37:48.835194  531993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34367
	I0923 12:37:48.835776  531993 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:37:48.836369  531993 main.go:141] libmachine: Using API Version  1
	I0923 12:37:48.836399  531993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:37:48.836864  531993 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:37:48.837128  531993 main.go:141] libmachine: (multinode-915704) Calling .GetIP
	I0923 12:37:48.841120  531993 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:37:48.841610  531993 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:34:41 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:37:48.841639  531993 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:37:48.841793  531993 host.go:66] Checking if "multinode-915704" exists ...
	I0923 12:37:48.842180  531993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:37:48.842236  531993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:37:48.859327  531993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44819
	I0923 12:37:48.859870  531993 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:37:48.860491  531993 main.go:141] libmachine: Using API Version  1
	I0923 12:37:48.860520  531993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:37:48.860857  531993 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:37:48.861133  531993 main.go:141] libmachine: (multinode-915704) Calling .DriverName
	I0923 12:37:48.861364  531993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:37:48.861401  531993 main.go:141] libmachine: (multinode-915704) Calling .GetSSHHostname
	I0923 12:37:48.864534  531993 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:37:48.865083  531993 main.go:141] libmachine: (multinode-915704) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:99:2b", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:34:41 +0000 UTC Type:0 Mac:52:54:00:1f:99:2b Iaid: IPaddr:192.168.39.233 Prefix:24 Hostname:multinode-915704 Clientid:01:52:54:00:1f:99:2b}
	I0923 12:37:48.865119  531993 main.go:141] libmachine: (multinode-915704) DBG | domain multinode-915704 has defined IP address 192.168.39.233 and MAC address 52:54:00:1f:99:2b in network mk-multinode-915704
	I0923 12:37:48.865357  531993 main.go:141] libmachine: (multinode-915704) Calling .GetSSHPort
	I0923 12:37:48.865578  531993 main.go:141] libmachine: (multinode-915704) Calling .GetSSHKeyPath
	I0923 12:37:48.865746  531993 main.go:141] libmachine: (multinode-915704) Calling .GetSSHUsername
	I0923 12:37:48.865940  531993 sshutil.go:53] new ssh client: &{IP:192.168.39.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704/id_rsa Username:docker}
	I0923 12:37:48.947462  531993 ssh_runner.go:195] Run: systemctl --version
	I0923 12:37:48.955817  531993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:37:48.976382  531993 kubeconfig.go:125] found "multinode-915704" server: "https://192.168.39.233:8443"
	I0923 12:37:48.976430  531993 api_server.go:166] Checking apiserver status ...
	I0923 12:37:48.976466  531993 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:37:48.990621  531993 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1910/cgroup
	W0923 12:37:49.000243  531993 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1910/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0923 12:37:49.000306  531993 ssh_runner.go:195] Run: ls
	I0923 12:37:49.004708  531993 api_server.go:253] Checking apiserver healthz at https://192.168.39.233:8443/healthz ...
	I0923 12:37:49.008785  531993 api_server.go:279] https://192.168.39.233:8443/healthz returned 200:
	ok
	I0923 12:37:49.008813  531993 status.go:456] multinode-915704 apiserver status = Running (err=<nil>)
	I0923 12:37:49.008826  531993 status.go:176] multinode-915704 status: &{Name:multinode-915704 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:37:49.008851  531993 status.go:174] checking status of multinode-915704-m02 ...
	I0923 12:37:49.009329  531993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:37:49.009377  531993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:37:49.026943  531993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0923 12:37:49.027421  531993 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:37:49.027936  531993 main.go:141] libmachine: Using API Version  1
	I0923 12:37:49.027961  531993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:37:49.028304  531993 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:37:49.028512  531993 main.go:141] libmachine: (multinode-915704-m02) Calling .GetState
	I0923 12:37:49.030175  531993 status.go:364] multinode-915704-m02 host status = "Running" (err=<nil>)
	I0923 12:37:49.030193  531993 host.go:66] Checking if "multinode-915704-m02" exists ...
	I0923 12:37:49.030485  531993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:37:49.030526  531993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:37:49.047840  531993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44347
	I0923 12:37:49.048388  531993 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:37:49.048960  531993 main.go:141] libmachine: Using API Version  1
	I0923 12:37:49.048991  531993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:37:49.049335  531993 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:37:49.049526  531993 main.go:141] libmachine: (multinode-915704-m02) Calling .GetIP
	I0923 12:37:49.053278  531993 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:37:49.053784  531993 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:35:54 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:37:49.053816  531993 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:37:49.054012  531993 host.go:66] Checking if "multinode-915704-m02" exists ...
	I0923 12:37:49.054387  531993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:37:49.054440  531993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:37:49.070352  531993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0923 12:37:49.070898  531993 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:37:49.071439  531993 main.go:141] libmachine: Using API Version  1
	I0923 12:37:49.071467  531993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:37:49.071825  531993 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:37:49.072018  531993 main.go:141] libmachine: (multinode-915704-m02) Calling .DriverName
	I0923 12:37:49.072244  531993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:37:49.072267  531993 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHHostname
	I0923 12:37:49.075361  531993 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:37:49.075763  531993 main.go:141] libmachine: (multinode-915704-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ce:58", ip: ""} in network mk-multinode-915704: {Iface:virbr1 ExpiryTime:2024-09-23 13:35:54 +0000 UTC Type:0 Mac:52:54:00:38:ce:58 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:multinode-915704-m02 Clientid:01:52:54:00:38:ce:58}
	I0923 12:37:49.075796  531993 main.go:141] libmachine: (multinode-915704-m02) DBG | domain multinode-915704-m02 has defined IP address 192.168.39.118 and MAC address 52:54:00:38:ce:58 in network mk-multinode-915704
	I0923 12:37:49.075992  531993 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHPort
	I0923 12:37:49.076198  531993 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHKeyPath
	I0923 12:37:49.076365  531993 main.go:141] libmachine: (multinode-915704-m02) Calling .GetSSHUsername
	I0923 12:37:49.076500  531993 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19690-497735/.minikube/machines/multinode-915704-m02/id_rsa Username:docker}
	I0923 12:37:49.163529  531993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 12:37:49.176737  531993 status.go:176] multinode-915704-m02 status: &{Name:multinode-915704-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:37:49.176786  531993 status.go:174] checking status of multinode-915704-m03 ...
	I0923 12:37:49.177137  531993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:37:49.177192  531993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:37:49.193486  531993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45059
	I0923 12:37:49.194182  531993 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:37:49.194694  531993 main.go:141] libmachine: Using API Version  1
	I0923 12:37:49.194719  531993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:37:49.195161  531993 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:37:49.195309  531993 main.go:141] libmachine: (multinode-915704-m03) Calling .GetState
	I0923 12:37:49.197061  531993 status.go:364] multinode-915704-m03 host status = "Stopped" (err=<nil>)
	I0923 12:37:49.197076  531993 status.go:377] host is not running, skipping remaining checks
	I0923 12:37:49.197087  531993 status.go:176] multinode-915704-m03 status: &{Name:multinode-915704-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.35s)

TestMultiNode/serial/StartAfterStop (41.98s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-915704 node start m03 -v=7 --alsologtostderr: (41.342423928s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.98s)

TestMultiNode/serial/RestartKeepsNodes (175.63s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-915704
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-915704
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-915704: (28.017484491s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-915704 --wait=true -v=8 --alsologtostderr
E0923 12:40:48.522364  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-915704 --wait=true -v=8 --alsologtostderr: (2m27.520058475s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-915704
--- PASS: TestMultiNode/serial/RestartKeepsNodes (175.63s)

TestMultiNode/serial/DeleteNode (2.34s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-915704 node delete m03: (1.782687803s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.34s)

TestMultiNode/serial/StopMultiNode (25s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 stop
E0923 12:41:49.574330  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-915704 stop: (24.818122739s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-915704 status: exit status 7 (91.031855ms)

-- stdout --
	multinode-915704
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-915704-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-915704 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-915704 status --alsologtostderr: exit status 7 (86.011413ms)

-- stdout --
	multinode-915704
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-915704-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 12:41:54.112990  533765 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:41:54.113160  533765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:41:54.113172  533765 out.go:358] Setting ErrFile to fd 2...
	I0923 12:41:54.113176  533765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:41:54.113357  533765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-497735/.minikube/bin
	I0923 12:41:54.113541  533765 out.go:352] Setting JSON to false
	I0923 12:41:54.113578  533765 mustload.go:65] Loading cluster: multinode-915704
	I0923 12:41:54.113632  533765 notify.go:220] Checking for updates...
	I0923 12:41:54.113991  533765 config.go:182] Loaded profile config "multinode-915704": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:41:54.114018  533765 status.go:174] checking status of multinode-915704 ...
	I0923 12:41:54.114444  533765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:41:54.114514  533765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:41:54.130160  533765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37525
	I0923 12:41:54.130739  533765 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:41:54.131329  533765 main.go:141] libmachine: Using API Version  1
	I0923 12:41:54.131357  533765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:41:54.131791  533765 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:41:54.132003  533765 main.go:141] libmachine: (multinode-915704) Calling .GetState
	I0923 12:41:54.133677  533765 status.go:364] multinode-915704 host status = "Stopped" (err=<nil>)
	I0923 12:41:54.133691  533765 status.go:377] host is not running, skipping remaining checks
	I0923 12:41:54.133696  533765 status.go:176] multinode-915704 status: &{Name:multinode-915704 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 12:41:54.133738  533765 status.go:174] checking status of multinode-915704-m02 ...
	I0923 12:41:54.134033  533765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0923 12:41:54.134077  533765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0923 12:41:54.149449  533765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I0923 12:41:54.149982  533765 main.go:141] libmachine: () Calling .GetVersion
	I0923 12:41:54.150769  533765 main.go:141] libmachine: Using API Version  1
	I0923 12:41:54.150807  533765 main.go:141] libmachine: () Calling .SetConfigRaw
	I0923 12:41:54.151189  533765 main.go:141] libmachine: () Calling .GetMachineName
	I0923 12:41:54.151398  533765 main.go:141] libmachine: (multinode-915704-m02) Calling .GetState
	I0923 12:41:54.153133  533765 status.go:364] multinode-915704-m02 host status = "Stopped" (err=<nil>)
	I0923 12:41:54.153150  533765 status.go:377] host is not running, skipping remaining checks
	I0923 12:41:54.153158  533765 status.go:176] multinode-915704-m02 status: &{Name:multinode-915704-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.00s)

TestMultiNode/serial/ValidateNameConflict (48.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-915704
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-915704-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-915704-m02 --driver=kvm2 : exit status 14 (67.250691ms)

-- stdout --
	* [multinode-915704-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-915704-m02' is duplicated with machine name 'multinode-915704-m02' in profile 'multinode-915704'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-915704-m03 --driver=kvm2 
E0923 12:44:52.648316  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-915704-m03 --driver=kvm2 : (47.121827608s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-915704
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-915704: exit status 80 (220.073053ms)

-- stdout --
	* Adding node m03 to cluster multinode-915704 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-915704-m03 already exists in multinode-915704-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-915704-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-915704-m03: (1.014366758s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.47s)

TestPreload (193.79s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-260838 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0923 12:45:48.522225  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:46:49.574304  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-260838 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m2.225654656s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-260838 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-260838 image pull gcr.io/k8s-minikube/busybox: (2.205224129s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-260838
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-260838: (12.565887069s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-260838 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-260838 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (55.688130038s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-260838 image list
helpers_test.go:175: Cleaning up "test-preload-260838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-260838
--- PASS: TestPreload (193.79s)

TestScheduledStopUnix (118.12s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-134286 --memory=2048 --driver=kvm2 
E0923 12:48:51.592253  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-134286 --memory=2048 --driver=kvm2 : (46.433622461s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134286 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-134286 -n scheduled-stop-134286
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134286 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 12:49:21.360151  505012 retry.go:31] will retry after 93.821µs: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.361347  505012 retry.go:31] will retry after 145.436µs: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.362523  505012 retry.go:31] will retry after 147.342µs: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.363662  505012 retry.go:31] will retry after 205.772µs: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.364792  505012 retry.go:31] will retry after 750.737µs: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.365947  505012 retry.go:31] will retry after 886.403µs: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.367074  505012 retry.go:31] will retry after 1.22985ms: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.369301  505012 retry.go:31] will retry after 2.441658ms: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.372667  505012 retry.go:31] will retry after 3.215114ms: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.376892  505012 retry.go:31] will retry after 4.885524ms: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.382143  505012 retry.go:31] will retry after 7.00849ms: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.389886  505012 retry.go:31] will retry after 7.423988ms: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.398183  505012 retry.go:31] will retry after 11.296851ms: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.410482  505012 retry.go:31] will retry after 28.563258ms: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
I0923 12:49:21.439793  505012 retry.go:31] will retry after 38.451002ms: open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/scheduled-stop-134286/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134286 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-134286 -n scheduled-stop-134286
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-134286
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-134286 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-134286
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-134286: exit status 7 (67.987884ms)

-- stdout --
	scheduled-stop-134286
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-134286 -n scheduled-stop-134286
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-134286 -n scheduled-stop-134286: exit status 7 (65.536123ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-134286" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-134286
--- PASS: TestScheduledStopUnix (118.12s)

TestSkaffold (128.2s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3717314532 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-408531 --memory=2600 --driver=kvm2 
E0923 12:50:48.523535  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-408531 --memory=2600 --driver=kvm2 : (46.38402367s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3717314532 run --minikube-profile skaffold-408531 --kube-context skaffold-408531 --status-check=true --port-forward=false --interactive=false
E0923 12:51:49.574215  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3717314532 run --minikube-profile skaffold-408531 --kube-context skaffold-408531 --status-check=true --port-forward=false --interactive=false: (1m6.705910203s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-66978c6996-nhgxx" [890b4254-9236-4fe5-bf24-f630d39c7f5f] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004321218s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5f9888db6d-mqlcd" [446feac6-ad5f-4c95-99f8-a909f78a0a0d] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004136733s
helpers_test.go:175: Cleaning up "skaffold-408531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-408531
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-408531: (1.187607344s)
--- PASS: TestSkaffold (128.20s)

TestRunningBinaryUpgrade (135.97s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2140649587 start -p running-upgrade-957148 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2140649587 start -p running-upgrade-957148 --memory=2200 --vm-driver=kvm2 : (1m21.377571307s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-957148 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-957148 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (50.986960581s)
helpers_test.go:175: Cleaning up "running-upgrade-957148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-957148
E0923 12:57:31.249030  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-957148: (1.233226187s)
--- PASS: TestRunningBinaryUpgrade (135.97s)

TestKubernetesUpgrade (167.38s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-760278 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
E0923 12:55:48.523018  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-760278 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m23.649093393s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-760278
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-760278: (3.310489433s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-760278 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-760278 status --format={{.Host}}: exit status 7 (92.609269ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-760278 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
E0923 12:57:28.676188  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:57:28.683949  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:57:28.695454  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:57:28.716865  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:57:28.759209  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:57:28.840725  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:57:29.003011  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:57:29.324940  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:57:29.967296  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-760278 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (43.457350438s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-760278 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-760278 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-760278 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (91.810619ms)

-- stdout --
	* [kubernetes-upgrade-760278] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-760278
	    minikube start -p kubernetes-upgrade-760278 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7602782 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-760278 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-760278 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 
E0923 12:58:09.657202  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-760278 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2 : (35.259659939s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-760278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-760278
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-760278: (1.454927497s)
--- PASS: TestKubernetesUpgrade (167.38s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E0923 12:57:33.810286  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStoppedBinaryUpgrade/Setup (2.31s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (161.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3016534975 start -p stopped-upgrade-641701 --memory=2200 --vm-driver=kvm2 
E0923 12:57:38.933169  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 12:57:49.175201  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3016534975 start -p stopped-upgrade-641701 --memory=2200 --vm-driver=kvm2 : (1m1.672097236s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3016534975 -p stopped-upgrade-641701 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3016534975 -p stopped-upgrade-641701 stop: (13.173075193s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-641701 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0923 12:58:50.618691  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-641701 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m26.621658972s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (161.47s)

                                                
                                    
TestPause/serial/Start (73.08s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-818315 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-818315 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m13.081446618s)
--- PASS: TestPause/serial/Start (73.08s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-281128 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-281128 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (71.628662ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-281128] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-497735/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-497735/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (86.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-281128 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-281128 --driver=kvm2 : (1m25.626498989s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-281128 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (86.03s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (130.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (2m10.197690036s)
--- PASS: TestNetworkPlugins/group/auto/Start (130.20s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (78.54s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-818315 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-818315 --alsologtostderr -v=1 --driver=kvm2 : (1m18.508616204s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (78.54s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (30.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-281128 --no-kubernetes --driver=kvm2 
E0923 13:00:12.543261  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-281128 --no-kubernetes --driver=kvm2 : (29.216642284s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-281128 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-281128 status -o json: exit status 2 (303.381154ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-281128","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-281128
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-281128: (1.096344174s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.62s)
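The `status -o json` document the test checks above has a small, flat shape. As a minimal sketch (not minikube's own types), the fields visible in that output can be decoded like this; `profileStatus` and `parseStatus` are names we introduce here:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the fields visible in the `minikube status -o json`
// output above; the struct and helper names are ours, not minikube's API.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

// parseStatus decodes one status document of the shape printed above.
func parseStatus(raw string) (profileStatus, error) {
	var st profileStatus
	err := json.Unmarshal([]byte(raw), &st)
	return st, err
}

func main() {
	raw := `{"Name":"NoKubernetes-281128","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	st, err := parseStatus(raw)
	if err != nil {
		panic(err)
	}
	// Host VM running while kubelet/apiserver stay stopped is the state
	// behind the exit-status-2 result this test treats as expected.
	fmt.Printf("%s: host=%s kubelet=%s\n", st.Name, st.Host, st.Kubelet)
}
```

The non-zero exit from `status` here is not a failure: with Kubernetes components stopped, minikube signals degraded state via the exit code while still printing valid JSON.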

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-641701
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-641701: (1.017120538s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m24.323544706s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.32s)

                                                
                                    
TestNoKubernetes/serial/Start (48.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-281128 --no-kubernetes --driver=kvm2 
E0923 13:00:48.522858  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-281128 --no-kubernetes --driver=kvm2 : (48.942104295s)
--- PASS: TestNoKubernetes/serial/Start (48.94s)

                                                
                                    
TestPause/serial/Pause (1.01s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-818315 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-818315 --alsologtostderr -v=5: (1.007626723s)
--- PASS: TestPause/serial/Pause (1.01s)

                                                
                                    
TestPause/serial/VerifyStatus (0.3s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-818315 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-818315 --output=json --layout=cluster: exit status 2 (304.343921ms)

                                                
                                                
-- stdout --
	{"Name":"pause-818315","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-818315","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
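The `--layout=cluster` output above reports HTTP-style status codes per component (200 OK, 405 Stopped, 418 Paused, as seen in the JSON). A minimal Go sketch of decoding the subset of that document used here; the struct and helper names are ours, not minikube's:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus captures only the fields of the `minikube status
// --layout=cluster` JSON above that we inspect below.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusCode int
		Components map[string]struct {
			StatusCode int
			StatusName string
		}
	}
}

// parseClusterStatus decodes one cluster-layout status document.
func parseClusterStatus(raw string) (clusterStatus, error) {
	var cs clusterStatus
	err := json.Unmarshal([]byte(raw), &cs)
	return cs, err
}

func main() {
	raw := `{"Name":"pause-818315","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-818315","StatusCode":200,"Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},"kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`
	cs, err := parseClusterStatus(raw)
	if err != nil {
		panic(err)
	}
	// A paused cluster: 418 at the top level, kubelet stopped (405) on the node.
	fmt.Println(cs.StatusName, cs.Nodes[0].Components["kubelet"].StatusName)
}
```

As with the flat `status -o json` form, the exit status 2 accompanies the degraded (paused) state; the JSON itself is well-formed and machine-readable.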

                                                
                                    
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-818315 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
TestPause/serial/PauseAgain (0.73s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-818315 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.73s)

                                                
                                    
TestPause/serial/DeletePaused (1.1s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-818315 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-818315 --alsologtostderr -v=5: (1.103122938s)
--- PASS: TestPause/serial/DeletePaused (1.10s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.51s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (96.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m36.129551487s)
--- PASS: TestNetworkPlugins/group/calico/Start (96.13s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-066078 "pgrep -a kubelet"
I0923 13:01:20.237477  505012 config.go:182] Loaded profile config "auto-066078": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-066078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fssd6" [e9f56603-bd6e-4c50-b18b-49b6910105a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fssd6" [e9f56603-bd6e-4c50-b18b-49b6910105a5] Running
E0923 13:01:27.893740  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:01:30.455968  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:01:32.650590  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004889816s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.32s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-281128 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-281128 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.840661ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
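The `Process exited with status 3` in stderr above is systemd's convention: `systemctl is-active` exits 0 for an active unit and (per the LSB status convention) typically 3 for an inactive one, which the test treats as success since kubelet should not be running. A small sketch of that mapping, with names we introduce here (not minikube's code):

```go
package main

import "fmt"

// interpretIsActive maps a `systemctl is-active` exit status to a unit state.
// 0 = active and 3 = inactive follow the common systemd/LSB convention;
// anything else is left as unknown in this sketch.
func interpretIsActive(code int) string {
	switch code {
	case 0:
		return "active"
	case 3:
		return "inactive"
	default:
		return "unknown"
	}
}

func main() {
	// Status 3, as in the stderr above: the kubelet unit is not running,
	// which is exactly what VerifyK8sNotRunning wants to see.
	fmt.Println(interpretIsActive(3))
}
```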

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.84s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-281128
E0923 13:01:25.322993  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:01:25.329443  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:01:25.340868  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:01:25.362400  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:01:25.403900  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:01:25.485478  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:01:25.647770  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:01:25.970103  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-281128: (2.31219603s)
--- PASS: TestNoKubernetes/serial/Stop (2.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (44.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-281128 --driver=kvm2 
E0923 13:01:26.611502  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-281128 --driver=kvm2 : (44.615459269s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (44.62s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-066078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-m748t" [b421be72-ceaf-44ee-a455-b0810e244ba7] Running
E0923 13:01:45.819980  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00791896s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-066078 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-066078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t2w4k" [908d8045-aa3c-4630-8340-6e0373135401] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 13:01:49.574465  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-t2w4k" [908d8045-aa3c-4630-8340-6e0373135401] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.006354626s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (92.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m32.930122583s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (92.93s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-066078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-281128 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-281128 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.566142ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestNetworkPlugins/group/false/Start (120.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (2m0.435185661s)
--- PASS: TestNetworkPlugins/group/false/Start (120.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (108.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E0923 13:02:28.676325  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m48.60479094s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (108.60s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-cxwnr" [da87ca77-9ab7-4e61-82ee-55e1ccc0a0b6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004253716s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-066078 "pgrep -a kubelet"
I0923 13:02:46.952228  505012 config.go:182] Loaded profile config "calico-066078": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-066078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ldqrb" [f459d17e-7ab9-4d2e-b7c3-2a872daf6522] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 13:02:47.264345  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-ldqrb" [f459d17e-7ab9-4d2e-b7c3-2a872daf6522] Running
E0923 13:02:56.385047  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003851233s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.29s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-066078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (88.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m28.785437621s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.79s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-066078 "pgrep -a kubelet"
I0923 13:03:23.216984  505012 config.go:182] Loaded profile config "custom-flannel-066078": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-066078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ckpn8" [45a9f01a-4703-428d-bcf0-f785f945cd5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ckpn8" [45a9f01a-4703-428d-bcf0-f785f945cd5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.007028878s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-066078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (78.28s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E0923 13:04:09.185879  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m18.27648657s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-066078 "pgrep -a kubelet"
I0923 13:04:09.599418  505012 config.go:182] Loaded profile config "enable-default-cni-066078": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.85s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-066078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vkhrx" [8c75ca7b-37df-47dc-b174-19c82add4ad0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vkhrx" [8c75ca7b-37df-47dc-b174-19c82add4ad0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.004152168s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.85s)

TestNetworkPlugins/group/false/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-066078 "pgrep -a kubelet"
I0923 13:04:12.747213  505012 config.go:182] Loaded profile config "false-066078": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.25s)

TestNetworkPlugins/group/false/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-066078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9p4vk" [3f6db05f-9a33-4b02-97df-e9f71192f0bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9p4vk" [3f6db05f-9a33-4b02-97df-e9f71192f0bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004846456s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.30s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-066078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-066078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

TestNetworkPlugins/group/kubenet/Start (91.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-066078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m31.136084503s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (91.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (176.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-798489 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-798489 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m56.170698371s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (176.17s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-js2l6" [ec2b0e61-53d4-4c5f-b321-b344057d9339] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004883489s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-066078 "pgrep -a kubelet"
I0923 13:04:54.357811  505012 config.go:182] Loaded profile config "flannel-066078": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-066078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p8q2q" [edd87cbe-eb2d-4bb6-ad2d-5a66f05787fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p8q2q" [edd87cbe-eb2d-4bb6-ad2d-5a66f05787fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004540865s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-066078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-066078 "pgrep -a kubelet"
I0923 13:05:10.434111  505012 config.go:182] Loaded profile config "bridge-066078": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (14.3s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-066078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jmzsz" [0c4113bc-0a28-411a-a07b-154451a67446] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jmzsz" [0c4113bc-0a28-411a-a07b-154451a67446] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.004675288s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.30s)

TestStartStop/group/no-preload/serial/FirstStart (84.73s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-283961 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-283961 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (1m24.732975182s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (84.73s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-066078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

TestStartStop/group/embed-certs/serial/FirstStart (85.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-625814 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
E0923 13:05:48.522827  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-625814 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (1m25.857695588s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.86s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-066078 "pgrep -a kubelet"
I0923 13:06:13.941260  505012 config.go:182] Loaded profile config "kubenet-066078": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.25s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-066078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pmgwm" [a638e2d9-a0a6-4f41-bb95-9e4c069e8635] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 13:06:20.548216  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:20.554682  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:20.566195  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:20.587623  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:20.629131  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:20.710692  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:20.872819  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:21.194212  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-pmgwm" [a638e2d9-a0a6-4f41-bb95-9e4c069e8635] Running
E0923 13:06:21.836487  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:23.118816  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:25.323052  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:25.681151  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.003671575s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.25s)

TestNetworkPlugins/group/kubenet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-066078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

TestNetworkPlugins/group/kubenet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.14s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-066078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-580103 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0923 13:06:47.700039  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kindnet-066078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-580103 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (1m5.05307308s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (65.05s)

TestStartStop/group/no-preload/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-283961 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f6b8f506-692b-4358-be14-e1b6d4e23d7e] Pending
E0923 13:06:49.574601  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f6b8f506-692b-4358-be14-e1b6d4e23d7e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f6b8f506-692b-4358-be14-e1b6d4e23d7e] Running
E0923 13:06:52.821983  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kindnet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:06:53.027832  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.006244695s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-283961 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-283961 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-283961 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/no-preload/serial/Stop (13.34s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-283961 --alsologtostderr -v=3
E0923 13:07:01.526127  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:03.064263  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kindnet-066078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-283961 --alsologtostderr -v=3: (13.336534906s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.34s)

TestStartStop/group/embed-certs/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-625814 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4b873580-da5d-4a84-91ad-0ce80ef0e4ac] Pending
helpers_test.go:344: "busybox" [4b873580-da5d-4a84-91ad-0ce80ef0e4ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4b873580-da5d-4a84-91ad-0ce80ef0e4ac] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.0046727s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-625814 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.31s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283961 -n no-preload-283961
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283961 -n no-preload-283961: exit status 7 (76.525245ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-283961 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (307.77s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-283961 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-283961 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.1: (5m7.500401663s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-283961 -n no-preload-283961
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (307.77s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-625814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-625814 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.014989073s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-625814 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/embed-certs/serial/Stop (13.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-625814 --alsologtostderr -v=3
E0923 13:07:23.545933  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kindnet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:28.676141  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-625814 --alsologtostderr -v=3: (13.407575271s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.41s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-625814 -n embed-certs-625814
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-625814 -n embed-certs-625814: exit status 7 (67.706515ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-625814 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (389.45s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-625814 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-625814 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.1: (6m29.077050143s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-625814 -n embed-certs-625814
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (389.45s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-798489 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [62876470-1925-4220-991b-60b630e6572c] Pending
E0923 13:07:40.735914  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:40.742608  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:40.754174  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:40.776389  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:40.818903  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:40.902897  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:41.064921  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [62876470-1925-4220-991b-60b630e6572c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0923 13:07:41.387003  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:42.028768  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:42.488182  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:07:43.311005  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [62876470-1925-4220-991b-60b630e6572c] Running
E0923 13:07:45.873170  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00456162s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-798489 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.57s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-580103 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e5cbf1bf-e107-4323-bf63-1a02873257a1] Pending
E0923 13:07:50.995098  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [e5cbf1bf-e107-4323-bf63-1a02873257a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e5cbf1bf-e107-4323-bf63-1a02873257a1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004712341s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-580103 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-798489 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-798489 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-798489 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-798489 --alsologtostderr -v=3: (13.346108613s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-580103 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-580103 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-580103 --alsologtostderr -v=3
E0923 13:08:01.237436  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:04.507241  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kindnet-066078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-580103 --alsologtostderr -v=3: (13.340917133s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-798489 -n old-k8s-version-798489
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-798489 -n old-k8s-version-798489: exit status 7 (70.375204ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-798489 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (399.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-798489 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-798489 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m39.147074614s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-798489 -n old-k8s-version-798489
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (399.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-580103 -n default-k8s-diff-port-580103
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-580103 -n default-k8s-diff-port-580103: exit status 7 (74.643142ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-580103 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (313.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-580103 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1
E0923 13:08:21.719123  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:23.424801  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:23.431203  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:23.442611  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:23.464051  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:23.505450  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:23.586873  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:23.748405  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:24.070053  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:24.711854  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:25.993838  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:28.555840  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:33.677611  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:08:43.919184  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:02.681024  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:04.400554  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:04.410001  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:10.422200  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:10.428727  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:10.440154  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:10.461569  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:10.503012  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:10.584746  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:10.746718  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:11.068215  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:11.710015  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:12.991454  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:13.030013  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:13.036469  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:13.047977  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:13.069439  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:13.110950  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:13.192553  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:13.354689  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:13.676744  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:14.318134  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:15.553581  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:15.600057  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:18.162194  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:20.675223  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:23.283925  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:26.428938  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kindnet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:30.916995  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:33.525527  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:45.362415  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:48.095483  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:48.101920  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:48.113377  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:48.134879  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:48.176408  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:48.257926  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:48.419506  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:48.741868  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:49.384047  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:50.665984  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:51.398809  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:53.227521  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:54.007860  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:09:58.349880  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:08.591462  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:10.715731  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:10.722226  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:10.733683  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:10.755331  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:10.796892  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:10.878860  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:11.040429  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:11.361988  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:12.004086  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:13.285461  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:15.847390  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:20.969113  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:24.602978  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:29.072893  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:31.211119  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:32.360851  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:34.969788  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:48.522871  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/functional-544435/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:10:51.693037  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:07.284454  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:10.034915  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:14.174995  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:14.181483  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:14.192924  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:14.214593  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:14.256063  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:14.337554  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:14.499116  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:14.820861  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:15.462795  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:16.744884  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:19.306868  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:20.547765  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:24.428651  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:25.323338  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/gvisor-529491/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:32.655165  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:34.670725  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:42.566837  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kindnet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:48.252315  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/auto-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:49.574880  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/addons-825629/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:54.282227  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:55.152787  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:11:56.891113  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:12:10.270934  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kindnet-066078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-580103 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.1: (5m13.311646804s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-580103 -n default-k8s-diff-port-580103
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (313.64s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pnlng" [e56d8e5d-faad-4b0b-a243-7f21a4bf11f3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004723936s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pnlng" [e56d8e5d-faad-4b0b-a243-7f21a4bf11f3] Running
E0923 13:12:28.676674  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:12:31.956886  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00404534s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-283961 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-283961 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.43s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-283961 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-283961 -n no-preload-283961
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-283961 -n no-preload-283961: exit status 2 (245.965301ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-283961 -n no-preload-283961
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-283961 -n no-preload-283961: exit status 2 (248.748986ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-283961 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-283961 -n no-preload-283961
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-283961 -n no-preload-283961
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.43s)

TestStartStop/group/newest-cni/serial/FirstStart (59.7s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-318222 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
E0923 13:12:40.736665  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:12:54.577014  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/bridge-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:13:08.444723  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/calico-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:13:23.424253  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-318222 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (59.700800412s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.70s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lt8kn" [9ed99f6f-d845-4474-aa19-33573dc9cbac] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005088127s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lt8kn" [9ed99f6f-d845-4474-aa19-33573dc9cbac] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004498065s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-580103 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-318222 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/newest-cni/serial/Stop (12.78s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-318222 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-318222 --alsologtostderr -v=3: (12.77571582s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.78s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-580103 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-580103 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-580103 -n default-k8s-diff-port-580103
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-580103 -n default-k8s-diff-port-580103: exit status 2 (272.421727ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-580103 -n default-k8s-diff-port-580103
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-580103 -n default-k8s-diff-port-580103: exit status 2 (282.718153ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-580103 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-580103 -n default-k8s-diff-port-580103
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-580103 -n default-k8s-diff-port-580103
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.71s)
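The `--format={{.APIServer}}` and `--format={{.Kubelet}}` flags in the Pause checks above are Go text/template expressions evaluated against minikube's status value. A minimal sketch of that mechanism, assuming a hypothetical `Status` struct as a stand-in for minikube's actual type:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a hypothetical stand-in for the struct that
// `minikube status --format=...` renders its template against.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

// render evaluates a Go text/template (as passed to --format)
// against a Status value and returns the rendered string.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// A paused cluster reports a paused API server and a stopped kubelet,
	// matching the stdout captured in the log above.
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	out, _ := render("{{.APIServer}}", st)
	fmt.Println(out) // Paused
}
```

This is why the two status calls in the same Pause step can print different words: each template selects a different field of the one status value.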

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-318222 -n newest-cni-318222
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-318222 -n newest-cni-318222: exit status 7 (86.668277ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-318222 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
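The harness above notes "status error: exit status 7 (may be ok)": a stopped cluster makes `minikube status` exit non-zero, and the test treats that code as data rather than a failure. In Go, a non-zero exit surfaces as an `*exec.ExitError`; a minimal sketch of capturing it, using `sh -c 'exit 7'` as a stand-in for the minikube invocation:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit code, treating a
// non-zero status as data rather than a hard failure, the way the
// harness treats `minikube status` exit codes.
func exitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil // process ran but exited non-zero
	}
	return -1, err // command could not be started at all
}

func main() {
	code, err := exitCode("sh", "-c", "exit 7")
	if err != nil {
		panic(err)
	}
	fmt.Println(code) // 7
}
```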

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (37.63s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-318222 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1
E0923 13:13:51.126642  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/custom-flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:13:51.747387  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/skaffold-408531/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:13:58.037172  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kubenet-066078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-318222 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.1: (37.293301977s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-318222 -n newest-cni-318222
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (37.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-p4zb5" [74662c6a-2ed0-4f72-bc07-6b09a7c396bf] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-695b96c756-p4zb5" [74662c6a-2ed0-4f72-bc07-6b09a7c396bf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.004104796s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (8.01s)
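The waits above poll for pods whose labels satisfy a selector such as `k8s-app=kubernetes-dashboard`. Equality-based selector matching is just a subset check over the pod's label map; a minimal sketch under that assumption (not the real k8s.io/apimachinery implementation):

```go
package main

import "fmt"

// matches reports whether a pod's labels satisfy an equality-based
// selector: every selector key must be present with the same value.
func matches(selector, podLabels map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"k8s-app": "kubernetes-dashboard"}
	// Pods typically carry extra labels (e.g. the ReplicaSet hash);
	// those do not affect the match.
	pod := map[string]string{
		"k8s-app":           "kubernetes-dashboard",
		"pod-template-hash": "695b96c756",
	}
	fmt.Println(matches(selector, pod)) // true
}
```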

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-p4zb5" [74662c6a-2ed0-4f72-bc07-6b09a7c396bf] Running
E0923 13:14:10.421320  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/enable-default-cni-066078/client.crt: no such file or directory" logger="UnhandledError"
E0923 13:14:13.030120  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/false-066078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004005557s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-625814 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-625814 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
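VerifyKubernetesImages lists the node's images as JSON and reports anything outside the expected Kubernetes set (here, the busybox and gvisor-addon test images). Assuming, hypothetically, that the list decodes to plain repo-tagged names, the filter can be sketched as follows; `knownPrefixes` is an illustrative allow-list, not minikube's actual expected-image table:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// knownPrefixes is an illustrative allow-list; the real test compares
// against the exact image set minikube expects for the Kubernetes version.
var knownPrefixes = []string{"registry.k8s.io/"}

// nonKubernetesImages decodes a JSON array of image names and returns
// those not covered by the allow-list.
func nonKubernetesImages(jsonList []byte) ([]string, error) {
	var images []string
	if err := json.Unmarshal(jsonList, &images); err != nil {
		return nil, err
	}
	var extra []string
	for _, img := range images {
		known := false
		for _, p := range knownPrefixes {
			if strings.HasPrefix(img, p) {
				known = true
				break
			}
		}
		if !known {
			extra = append(extra, img)
		}
	}
	return extra, nil
}

func main() {
	in := []byte(`["registry.k8s.io/pause:3.10","gcr.io/k8s-minikube/busybox:1.28.4-glibc"]`)
	extra, _ := nonKubernetesImages(in)
	fmt.Println(extra) // [gcr.io/k8s-minikube/busybox:1.28.4-glibc]
}
```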

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-625814 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-625814 -n embed-certs-625814
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-625814 -n embed-certs-625814: exit status 2 (256.191369ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-625814 -n embed-certs-625814
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-625814 -n embed-certs-625814: exit status 2 (273.095044ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-625814 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-625814 -n embed-certs-625814
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-625814 -n embed-certs-625814
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.52s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-318222 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-318222 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-318222 -n newest-cni-318222
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-318222 -n newest-cni-318222: exit status 2 (244.914083ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-318222 -n newest-cni-318222
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-318222 -n newest-cni-318222: exit status 2 (239.810359ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-318222 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-318222 -n newest-cni-318222
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-318222 -n newest-cni-318222
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.34s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6pb5v" [11b578cd-248d-4059-9b8b-e34692c13b29] Running
E0923 13:14:48.094460  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/flannel-066078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00403731s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-6pb5v" [11b578cd-248d-4059-9b8b-e34692c13b29] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003974044s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-798489 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-798489 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-798489 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-798489 -n old-k8s-version-798489
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-798489 -n old-k8s-version-798489: exit status 2 (246.207788ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-798489 -n old-k8s-version-798489
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-798489 -n old-k8s-version-798489: exit status 2 (250.149067ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-798489 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-798489 -n old-k8s-version-798489
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-798489 -n old-k8s-version-798489
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.30s)

Test skip (31/340)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.52s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-066078 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-066078

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-066078

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-066078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-066078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-066078

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-066078

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-066078

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-066078

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-066078

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-066078

>>> host: /etc/nsswitch.conf:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: /etc/hosts:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: /etc/resolv.conf:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-066078

>>> host: crictl pods:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: crictl containers:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> k8s: describe netcat deployment:
error: context "cilium-066078" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-066078" does not exist

>>> k8s: netcat logs:
error: context "cilium-066078" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-066078" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-066078" does not exist

>>> k8s: coredns logs:
error: context "cilium-066078" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-066078" does not exist

>>> k8s: api server logs:
error: context "cilium-066078" does not exist

>>> host: /etc/cni:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: ip a s:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: ip r s:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: iptables-save:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: iptables table nat:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-066078

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-066078

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-066078" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-066078" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-066078

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-066078

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-066078" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-066078" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-066078" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-066078" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-066078" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: kubelet daemon config:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> k8s: kubelet logs:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-066078

>>> host: docker daemon status:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: docker daemon config:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: docker system info:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: cri-docker daemon status:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: cri-docker daemon config:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: cri-dockerd version:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: containerd daemon status:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: containerd daemon config:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: containerd config dump:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: crio daemon status:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: crio daemon config:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: /etc/crio:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

>>> host: crio config:
* Profile "cilium-066078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-066078"

----------------------- debugLogs end: cilium-066078 [took: 3.380668241s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-066078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-066078
--- SKIP: TestNetworkPlugins/group/cilium (3.52s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-398442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-398442
E0923 13:06:45.136820  505012 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19690-497735/.minikube/profiles/kindnet-066078/client.crt: no such file or directory" logger="UnhandledError"
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)